A Generalization of Cohen's Kappa Agreement Measure to Interval Measurement and Multiple Raters
- 1 December 1988
- journal article
- Published by SAGE Publications in Educational and Psychological Measurement
- Vol. 48(4), 921-933
- https://doi.org/10.1177/0013164488484007
Abstract
Comparing different groups (e.g., cultures, age cohorts) using survey-type instruments raises the question of factorial invariance, that is, whether members of different groups ascribe the same meanings to survey items. This article attempts to advance multi-group research by (a) providing a concise summary of the factorial invariance problem, (b) proposing a simplified notation intended to facilitate discussion of the problem, and (c) suggesting a structured approach for testing large models. The approach is illustrated with an extended example, and two computer programs designed to make the recommended procedures less laborious are offered.
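For context on the measure the title names, Cohen's (1960) kappa for two raters and nominal categories is the chance-corrected agreement kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is agreement expected from the raters' marginal proportions. The Python sketch below illustrates that classical two-rater case only; the function name and example counts are illustrative assumptions, not taken from this article or the computer programs it offers.

```python
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    """Cohen's (1960) kappa for two raters and nominal categories.

    confusion[i, j] counts items that rater 1 placed in category i
    and rater 2 placed in category j.
    """
    total = confusion.sum()
    p_observed = np.trace(confusion) / total       # proportion of exact agreement
    row_marg = confusion.sum(axis=1) / total       # rater 1 category proportions
    col_marg = confusion.sum(axis=0) / total       # rater 2 category proportions
    p_expected = float(row_marg @ col_marg)        # agreement expected by chance
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two raters classifying 40 items into 3 categories.
table = np.array([[10, 2, 1],
                  [3, 9, 2],
                  [1, 2, 10]])
print(round(cohens_kappa(table), 3))
```

The article's contribution, per its title, is to extend this idea from nominal categories and two raters to interval-level measurement and multiple raters.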
This publication has 30 references indexed in Scilit:
- Kappa Reliabilities for Continuous Behaviors and Events. Educational and Psychological Measurement, 1985
- The Effect of Number of Rating Scale Categories on Levels of Interrater Reliability: A Monte Carlo Investigation. Applied Psychological Measurement, 1985
- Coefficient Kappa: Some Uses, Misuses, and Alternatives. Educational and Psychological Measurement, 1981
- Integration and Generalization of Kappas for Multiple Raters. Psychological Bulletin, 1980
- On the Methods and Theory of Reliability. Journal of Nervous & Mental Disease, 1976
- On Various Intraclass Correlation Reliability Coefficients. Psychological Bulletin, 1976
- The Intraclass Correlation Coefficient as a Measure of Reliability. Psychological Reports, 1966
- The Measurement of Observer Disagreement in the Recording of Signs. Journal of the Royal Statistical Society, Series A (General), 1966
- A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 1960
- The Comparison of Percentages in Matched Samples. Biometrika, 1950