Interpretation of Kappa and B statistics measures of agreement
- 1 February 1997
- journal article
- research article
- Published by Taylor & Francis in Journal of Applied Statistics
- Vol. 24 (1), 105-112
- https://doi.org/10.1080/02664769723918
Abstract
The Kappa statistic proposed by Cohen and the B statistic proposed by Bangdiwala are used to quantify the agreement between two observers who independently classify the same n units into the same k categories. Both statistics correct for the agreement expected to arise from chance alone. The Kappa statistic adjusts the observed proportion of agreement and ranges from -p_c/(1 - p_c) to 1, where p_c is the proportion of agreement expected by chance; the B statistic adjusts the observed area of agreement relative to that expected by chance and ranges from 0 to 1. Statistical guidelines for the interpretation of either statistic are not available; for the Kappa statistic, the arbitrary interpretation suggested by Landis and Koch is commonly quoted. This paper compares the behavior of the Kappa statistic and the B statistic in 3 × 3 and 4 × 4 contingency tables under different agreement patterns. Based on simulation results, non-arbitrary guidelines for the interpretation of both statistics are provided.
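The range quoted for Kappa follows directly from its definition. A minimal sketch of the algebra, writing p_o for the observed proportion of agreement (a symbol introduced here for illustration, not used in the abstract), and a commonly cited form of Bangdiwala's B in terms of the cell counts n_ij of the k × k contingency table (the abstract does not give this formula explicitly, so it is shown here only as an assumed standard definition):

```latex
% Cohen's Kappa: observed agreement p_o adjusted for chance agreement p_c.
\kappa = \frac{p_o - p_c}{1 - p_c}
% Perfect agreement (p_o = 1) gives \kappa = 1; total disagreement (p_o = 0) gives
\kappa_{\min} = \frac{0 - p_c}{1 - p_c} = -\frac{p_c}{1 - p_c},
% which is the lower bound -p_c/(1 - p_c) stated in the abstract.

% Bangdiwala's B (assumed standard form): ratio of the area of the squares of
% agreement (diagonal cells) to the area of the marginal rectangles.
B = \frac{\sum_{i=1}^{k} n_{ii}^{2}}{\sum_{i=1}^{k} n_{i+}\, n_{+i}},
% so B = 0 when no units fall on the diagonal and B = 1 under perfect agreement.
```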