Category Distinguishability and Observer Agreement
- 1 September 1986
- journal article
- Published by Wiley in Australian Journal of Statistics
- Vol. 28 (3), 371-388
- https://doi.org/10.1111/j.1467-842x.1986.tb00709.x
Abstract
Summary: It is common in the medical, biological, and social sciences for the categories into which an object is classified not to have a fully objective definition. Theoretically speaking, the categories are therefore not completely distinguishable. The practical extent of their distinguishability can be measured when two expert observers classify the same sample of objects. It is shown, under reasonable assumptions, that the matrix of joint classification probabilities is quasi-symmetric, and that its symmetric component is non-negative definite. The degree of distinguishability between two categories is defined and is used to give a measure of overall category distinguishability. It is argued that the kappa measure of observer agreement is unsatisfactory as a measure of overall category distinguishability.
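The abstract refers to three quantities: the matrix of joint classification probabilities for two observers, its symmetric component, and the kappa statistic it critiques. The sketch below is a minimal numerical illustration of these standard quantities only; the joint count matrix is hypothetical, and the paper's own degree-of-distinguishability measure is not reproduced here.

```python
import numpy as np

# Hypothetical joint classification counts: rows are observer A's category,
# columns are observer B's category, for the same sample of objects.
counts = np.array([
    [40,  5,  2],
    [ 6, 30,  4],
    [ 1,  3, 20],
], dtype=float)

# Joint classification probability matrix P (the object the paper argues is
# quasi-symmetric under its assumptions).
P = counts / counts.sum()

# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement).
p_o = np.trace(P)                       # observed proportion of agreement
p_e = P.sum(axis=1) @ P.sum(axis=0)     # chance agreement from the marginals
kappa = (p_o - p_e) / (1 - p_e)

# Symmetric component of P; the paper shows it is non-negative definite
# under its assumptions, which can be checked numerically for this example.
S = (P + P.T) / 2
eigenvalues = np.linalg.eigvalsh(S)

print(f"observed agreement p_o = {p_o:.3f}")
print(f"chance agreement   p_e = {p_e:.3f}")
print(f"Cohen's kappa          = {kappa:.3f}")
print(f"smallest eigenvalue of symmetric component = {eigenvalues.min():.4f}")
```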