A Computer Program for Assessing Specific Category Rater Agreement for Qualitative Data
- 1 October 1978
- journal article
- research article
- Published by SAGE Publications in Educational and Psychological Measurement
- Vol. 38 (3), 805-813
- https://doi.org/10.1177/001316447803800322
Abstract
This program computes specific category agreement levels for both nominally and ordinally scaled data. For ordinally scaled data, an option is available for collapsing the original scale to a smaller number of categories, with the goal of improving the level of inter-rater reliability for the rating scale.
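
The abstract describes two operations: computing an agreement level for each rating category separately, and merging adjacent categories of an ordinal scale before recomputing agreement. The sketch below is a minimal illustration of both ideas, not the paper's original program; the proportion-of-specific-agreement index 2·n_kk / (n_k· + n_·k) and the example data are assumptions chosen for illustration.

```python
import numpy as np

def specific_agreement(confusion: np.ndarray) -> np.ndarray:
    """Proportion of specific agreement for each category.

    confusion[i, j] = number of subjects rater A placed in category i
    and rater B placed in category j.
    """
    confusion = np.asarray(confusion, dtype=float)
    diag = np.diag(confusion)        # subjects both raters placed in category k
    row = confusion.sum(axis=1)      # rater A marginals
    col = confusion.sum(axis=0)      # rater B marginals
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(row + col > 0, 2.0 * diag / (row + col), np.nan)

def collapse_ordinal(confusion: np.ndarray, groups: list[list[int]]) -> np.ndarray:
    """Collapse an ordinal confusion matrix by merging the listed
    (adjacent) category indices, e.g. groups=[[0, 1], [2], [3]]."""
    confusion = np.asarray(confusion, dtype=float)
    k = len(groups)
    out = np.zeros((k, k))
    for i, gi in enumerate(groups):
        for j, gj in enumerate(groups):
            out[i, j] = confusion[np.ix_(gi, gj)].sum()
    return out

if __name__ == "__main__":
    # Hypothetical 4-point ordinal rating scale, two raters, 100 subjects.
    c = np.array([[12,  5,  1,  0],
                  [ 4, 20,  6,  1],
                  [ 1,  7, 18,  4],
                  [ 0,  1,  5, 15]])
    print("per-category agreement:", specific_agreement(c).round(2))
    # Merge the two middle categories and check whether agreement improves.
    c2 = collapse_ordinal(c, [[0], [1, 2], [3]])
    print("after collapsing categories 2 and 3:", specific_agreement(c2).round(2))
```

In this made-up example, merging the two middle categories raises the specific agreement for the merged category, which is the kind of improvement the collapsing option is intended to explore.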