Interobserver Variation in Interpreting Chest Radiographs for the Diagnosis of Acute Respiratory Distress Syndrome
- 1 January 2000
- research article
- Published by the American Thoracic Society in the American Journal of Respiratory and Critical Care Medicine
- Vol. 161 (1), 85-90
- https://doi.org/10.1164/ajrccm.161.1.9809003
Abstract
To measure the reliability of chest radiographic diagnosis of acute respiratory distress syndrome (ARDS), we conducted an observer agreement study in which two of eight intensivists and a radiologist, blinded to one another's interpretations, reviewed 778 radiographs from 99 critically ill patients. One intensivist and a radiologist participated in pilot training. Raters made a global rating of the presence of ARDS on the basis of diffuse bilateral infiltrates. We assessed interobserver agreement in a pairwise fashion. For rater pairings in which one rater had not participated in the consensus process, we found moderate levels of raw (0.68 to 0.80), chance-corrected (κ 0.38 to 0.55), and chance-independent (Φ 0.53 to 0.75) agreement. The pair of raters who participated in consensus training achieved excellent to almost perfect raw (0.88 to 0.94), chance-corrected (κ 0.72 to 0.88), and chance-independent (Φ 0.74 to 0.89) agreement. We conclude that intensivists without formal consensus training can achieve moderate levels of agreement. Consensus training is necessary to achieve the substantial or almost perfect levels of agreement optimal for the conduct of clinical trials.

Meade MO, Cook RJ, Guyatt GH, Groll R, Kachura JR, Bedard M, Cook DJ, Slutsky AS, Stewart TE. Interobserver variation in interpreting chest radiographs for the diagnosis of acute respiratory distress syndrome.
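For readers unfamiliar with the agreement measures reported above, the sketch below shows how raw agreement and chance-corrected agreement (Cohen's κ, where κ = (p_o − p_e)/(1 − p_e)) are computed for one pair of raters making dichotomous ratings. The ratings and function names are illustrative only, not data from the study; the study's Φ statistic is a separate chance-independent measure and is not reproduced here.

```python
# Illustrative sketch (hypothetical data, not from the study): pairwise raw
# agreement and Cohen's kappa for two raters' dichotomous ARDS readings.
from collections import Counter

def raw_agreement(a, b):
    """Proportion of radiographs on which both raters gave the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e),
    where p_e is the agreement expected from each rater's marginal rates."""
    n = len(a)
    p_o = raw_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_e = sum((ca[k] / n) * (cb[k] / n) for k in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical readings: 1 = diffuse bilateral infiltrates present, 0 = absent.
rater1 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
rater2 = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
print(f"raw agreement = {raw_agreement(rater1, rater2):.2f}")  # 0.80
print(f"kappa         = {cohens_kappa(rater1, rater2):.2f}")   # 0.60
```

Note how κ (0.60) falls below the raw agreement (0.80): with balanced marginal rates, half of the observed agreement is expected by chance alone, which is why the abstract reports chance-corrected values well below the raw ones.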