Measuring agreement of administrative data with chart data using prevalence unadjusted and adjusted kappa
Open Access
- 21 January 2009
- Research article
- Published by Springer Nature in BMC Medical Research Methodology
- Vol. 9 (1), 5
- https://doi.org/10.1186/1471-2288-9-5
Abstract
Kappa is commonly used to assess the agreement between a data source and a reference standard in classifying conditions, but it has been criticized for being highly dependent on the prevalence of the condition. To overcome this limitation, a prevalence-adjusted and bias-adjusted kappa (PABAK) has been developed. The purpose of this study is to demonstrate the performance of kappa and PABAK, and to assess the agreement between hospital discharge administrative data and chart review data for clinical conditions.
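To make the prevalence dependence concrete, below is a minimal sketch (not taken from the paper) of Cohen's kappa and PABAK computed from a 2x2 agreement table, where administrative data are compared against chart review as the reference standard. The cell counts in the example are hypothetical, chosen only to illustrate the "high agreement, low kappa" paradox; PABAK is computed with the standard definition 2p_o - 1.

```python
# Sketch: Cohen's kappa and PABAK for a 2x2 agreement table.
# Administrative data vs. chart review (reference standard).
# Cell counts: a = both positive, b = admin+/chart-,
#              c = admin-/chart+, d = both negative.

def kappa_and_pabak(a, b, c, d):
    n = a + b + c + d
    p_o = (a + d) / n                                       # observed agreement
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance-expected agreement
    kappa = (p_o - p_e) / (1 - p_e)                         # Cohen's kappa
    pabak = 2 * p_o - 1                                     # prevalence- and bias-adjusted kappa
    return kappa, pabak

# Hypothetical rare condition: raw agreement is 99%, yet kappa is low
# because chance agreement is inflated by the skewed prevalence.
kappa, pabak = kappa_and_pabak(a=2, b=5, c=5, d=988)
print(f"kappa = {kappa:.3f}, PABAK = {pabak:.3f}")
# kappa ~= 0.281, PABAK = 0.980
```

The example shows why kappa and PABAK can diverge sharply for rare conditions even when the two data sources almost always agree, which is the behaviour the study examines.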