A GRAPHICAL JUDGMENTAL AID WHICH SUMMARIZES OBTAINED AND CHANCE RELIABILITY DATA AND HELPS ASSESS THE BELIEVABILITY OF EXPERIMENTAL EFFECTS
- 1 December 1979
- research article
- Published by Wiley in Journal of Applied Behavior Analysis
- Vol. 12 (4), 523-533
- https://doi.org/10.1901/jaba.1979.12-523
Abstract
Interval by interval reliability has been criticized for “inflating” observer agreement when target behavior rates are very low or very high. Scored interval reliability and its converse, unscored interval reliability, however, vary as target behavior rates vary when observer disagreement rates are constant. These problems, along with the existence of “chance” values of each reliability which also vary as a function of response rate, may cause researchers and consumers difficulty in interpreting observer agreement measures. Because each of these reliabilities essentially compares observer disagreements to a different base, it is suggested that the disagreement rate itself be the first measure of agreement examined, and its magnitude relative to occurrence and to nonoccurrence agreements then be considered. This is easily done via a graphic presentation of the disagreement range as a bandwidth around reported rates of target behavior. Such a graphic presentation summarizes all the information collected during reliability assessments and permits visual determination of each of the three reliabilities. In addition, graphing the “chance” disagreement range around the bandwidth permits easy determination of whether or not true observer agreement has likely been demonstrated. Finally, the limits of the disagreement bandwidth help assess the believability of claimed experimental effects: those leaving no overlap between disagreement ranges are probably believable, others are not.
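The three reliabilities named in the abstract are standard interval-based agreement measures, each comparing disagreements to a different base. A minimal sketch of how they could be computed from two observers' interval records, plus the usual expected-by-chance interval-by-interval agreement for independent observers, follows; the function names and the example records are hypothetical, not taken from the article:

```python
# Hedged sketch of the three interval-based agreement measures the abstract
# discusses. Each observer's record is a list of 0/1 per observation interval
# (1 = target behavior scored in that interval). Names are illustrative.

def agreement_measures(obs1, obs2):
    """Return (interval-by-interval, scored-interval, unscored-interval)
    agreement proportions for two observers' interval records."""
    assert len(obs1) == len(obs2), "records must cover the same intervals"
    n = len(obs1)
    both = sum(1 for a, b in zip(obs1, obs2) if a == 1 and b == 1)
    neither = sum(1 for a, b in zip(obs1, obs2) if a == 0 and b == 0)
    disagree = n - both - neither  # intervals scored by exactly one observer

    # Interval-by-interval: all agreements over all intervals.
    interval_by_interval = (both + neither) / n
    # Scored-interval: occurrence agreements over intervals where at least
    # one observer scored an occurrence (ignores joint nonoccurrences).
    scored = both / (both + disagree) if (both + disagree) else 1.0
    # Unscored-interval: nonoccurrence agreements over intervals where at
    # least one observer scored a nonoccurrence.
    unscored = neither / (neither + disagree) if (neither + disagree) else 1.0
    return interval_by_interval, scored, unscored

def chance_interval_by_interval(p1, p2):
    """Expected interval-by-interval agreement if two observers scored
    occurrences independently at rates p1 and p2 -- one form of the
    response-rate-dependent 'chance' level the abstract warns about."""
    return p1 * p2 + (1 - p1) * (1 - p2)
```

For example, with records `[1, 1, 0, 0, 1, 0]` and `[1, 0, 0, 0, 1, 0]` there are 2 joint occurrences, 3 joint nonoccurrences, and 1 disagreement, giving interval-by-interval 5/6, scored-interval 2/3, and unscored-interval 3/4; note how the same single disagreement yields three different figures because each measure uses a different base.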