Computer Programs for Assessing Rater Agreement and Rater Bias for Qualitative Data
- 1 April 1977
- journal article
- Published by SAGE Publications in Educational and Psychological Measurement
- Vol. 37 (1), 195-201
- https://doi.org/10.1177/001316447703700120
Abstract
The programs described compute rater agreement and rater bias statistics for qualitative data. They also utilize techniques for (a) selecting the most reliable raters from a set of raters and (b) identifying those cases which are the most difficult for raters to classify.
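The article itself describes 1970s-era programs, so the exact implementation is not reproduced here. As a hedged illustration of the kind of agreement statistic such programs compute for qualitative (nominal) ratings, the following is a minimal sketch of Cohen's kappa for two raters in Python; the function name, the toy data, and the two-rater restriction are assumptions for the example, not the article's code:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    assigning nominal (qualitative) categories to the same cases."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement, from each rater's marginal category frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: two raters classify six cases into categories "x"/"y".
a = ["x", "x", "y", "y", "x", "y"]
b = ["x", "x", "y", "x", "x", "y"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; the article's bias statistics additionally ask whether disagreements fall systematically in one direction.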