Confidence intervals for the interrater agreement measure kappa
- 1 January 1987
- journal article
- Published by Taylor & Francis in Communications in Statistics - Theory and Methods
- Vol. 16 (4), 953-968
- https://doi.org/10.1080/03610928708829415
Abstract
The asymptotic normal approximation to the distribution of the estimated agreement measure κ̂ between two raters has been shown to perform poorly for small sample sizes when the true kappa is nonzero. This paper examines the effect of skewness corrections and transformations of κ̂ on the attained confidence levels. Small-sample simulations demonstrate improved agreement between the nominal and actual levels of confidence intervals and hypothesis tests that incorporate these corrections.
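As a point of reference for the interval the paper improves on, the following is a minimal sketch of the uncorrected Wald-type confidence interval for Cohen's kappa based on the asymptotic normal approximation. The simplified large-sample standard error used here is an illustrative assumption (the textbook approximation, not the paper's variance expression), and the skewness-corrected and transformed intervals studied in the paper are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def kappa_wald_ci(table, alpha=0.05):
    """Cohen's kappa for a k x k two-rater agreement table, with a
    naive Wald-type CI from the asymptotic normal approximation.

    Assumes the simplified large-sample SE
        sqrt(p_o * (1 - p_o) / (n * (1 - p_e)**2)),
    i.e. the kind of uncorrected interval whose small-sample
    coverage the paper shows can be poor when true kappa != 0.
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p = table / n
    p_o = np.trace(p)                    # observed agreement
    p_e = p.sum(axis=1) @ p.sum(axis=0)  # chance-expected agreement
    kappa = (p_o - p_e) / (1.0 - p_e)
    se = np.sqrt(p_o * (1.0 - p_o) / (n * (1.0 - p_e) ** 2))
    z = norm.ppf(1.0 - alpha / 2.0)
    return kappa, (kappa - z * se, kappa + z * se)

# Hypothetical example: 2 raters, 2 categories, n = 50 subjects
table = [[20, 5],
         [6, 19]]
k, (lo, hi) = kappa_wald_ci(table)
print(f"kappa = {k:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

With a small n such as this, the paper's point is that the actual coverage of such an interval can fall noticeably short of the nominal 95% level unless skewness corrections or transformations of κ̂ are applied.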