Measurement reliability and agreement in psychiatry
- 1 June 1998
- research article
- Published by SAGE Publications in Statistical Methods in Medical Research
- Vol. 7 (3), 301-317
- https://doi.org/10.1177/096228029800700306
Abstract
Psychiatric research has benefited from attention to measurement theories of reliability, and reliability/agreement statistics for psychopathology ratings and diagnoses are regularly reported in empirical studies. Nevertheless, controversies remain over how reliability should be measured and over how much of a research program's resources should be devoted to studying measurement quality. These issues are discussed in the context of recent theoretical and technical contributions to the statistical analysis of reliability, with special attention to statistical studies published since Kraemer's 1992 review of reliability methods in this journal.
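The abstract refers to agreement statistics for categorical psychiatric ratings without defining them. As a minimal illustration, not drawn from the article itself, the sketch below computes Cohen's kappa, a common chance-corrected agreement coefficient for two raters' nominal diagnoses, from a hypothetical 2x2 agreement table; the counts and function name are purely illustrative.

```python
# Illustrative only: Cohen's kappa for two raters' binary diagnoses
# (disorder present/absent), computed from a hypothetical 2x2 table.

def cohens_kappa(table):
    """table[i][j] = number of subjects rated category i by rater A and j by rater B."""
    n = sum(sum(row) for row in table)
    # Observed proportion of agreement: the diagonal cells.
    p_o = sum(table[i][i] for i in range(len(table))) / n
    # Chance-expected agreement from each rater's marginal proportions.
    row_marg = [sum(row) / n for row in table]
    col_marg = [sum(table[i][j] for i in range(len(table))) / n
                for j in range(len(table[0]))]
    p_e = sum(r * c for r, c in zip(row_marg, col_marg))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts: 40 agreed "present", 40 agreed "absent", 20 disagreements.
example = [[40, 10],
           [10, 40]]
print(round(cohens_kappa(example), 2))  # 0.6
```

With these made-up counts, observed agreement is 0.80 and chance-expected agreement is 0.50, giving kappa = 0.60; how such coefficients should be chosen, interpreted, and extended is the subject of the review.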