Focus on psychometrics: the kappa statistic for establishing interrater reliability in the secondary analysis of qualitative clinical data
- 1 April 1992
- journal article
- Published by Wiley in Research in Nursing & Health
- Vol. 15 (2), 153-158
- https://doi.org/10.1002/nur.4770150210
Abstract
Analysis of extant clinical records is receiving increased emphasis in nursing investigations. Appropriate use of this approach to patient research requires careful attention to data management, including assessment of reliability. Percent agreement, phi, and Kappa all serve as estimates of interrater reliability in the analysis of data. Kappa has particular merit as a measure of interrater reliability; it also has some peculiar problems in implementation and interpretation. The nature and computation of Kappa and its application in analysis of clinical data are discussed.
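For readers unfamiliar with the statistic the abstract refers to, the sketch below shows the standard Cohen's kappa computation for two raters assigning nominal codes: observed agreement corrected for the agreement expected by chance from each rater's marginal proportions (Cohen, 1960). It is a minimal illustration with hypothetical data, not code or results taken from the article itself.

```python
# Minimal sketch of Cohen's kappa for two raters using nominal codes
# (standard formula: kappa = (p_o - p_e) / (1 - p_e); data below are hypothetical).
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Return Cohen's kappa for two equal-length lists of nominal codes."""
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("ratings must be non-empty and of equal length")
    n = len(ratings_a)
    # Observed agreement: proportion of items on which the two raters agree.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance-expected agreement, from each rater's marginal category proportions.
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters coding ten chart excerpts as "pain" / "no pain".
rater1 = ["pain", "pain", "no pain", "pain", "no pain",
          "pain", "no pain", "no pain", "pain", "pain"]
rater2 = ["pain", "no pain", "no pain", "pain", "no pain",
          "pain", "no pain", "pain", "pain", "pain"]
print(round(cohens_kappa(rater1, rater2), 3))  # 0.583
```

In this hypothetical example the raters agree on 8 of 10 excerpts (percent agreement 0.80), yet kappa is about 0.58, which illustrates why kappa is a stricter estimate of interrater reliability than percent agreement.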