A new method for assessing interexaminer agreement when multiple ratings are made on a single subject: applications to the assessment of neuropsychiatric symptomatology
Open Access
- 29 August 1997
- journal article
- Published by Elsevier in Psychiatry Research
- Vol. 72 (1) , 51-63
- https://doi.org/10.1016/s0165-1781(97)00095-4
Abstract
No abstract available.
This publication has 16 references indexed in Scilit:
- A computer program for assessing interexaminer agreement when multiple ratings are made on a single subject. Psychiatry Research, 1997
- Diagnosing Autism using ICD-10 criteria: A comparison of neural networks and standard multivariate procedures. Child Neuropsychology, 1995
- The Positive and Negative Syndrome Scale and the Brief Psychiatric Rating Scale. Journal of Nervous & Mental Disease, 1992
- Assessing the reliability of clinical scales when the data have both nominal and ordinal features: Proposed guidelines for neuropsychological assessments. Journal of Clinical and Experimental Neuropsychology, 1992
- A Computer Program for Calculating Subject-by-Subject Kappa or Weighted Kappa Coefficients. Educational and Psychological Measurement, 1990
- Sample size requirements for reliability studies. Statistics in Medicine, 1987
- The Quality of Life Scale: An Instrument for Rating the Schizophrenic Deficit Syndrome. Schizophrenia Bulletin, 1984
- Large sample variance of kappa in the case of different sets of raters. Psychological Bulletin, 1979
- Assessing Inter-Rater Reliability for Rating Scales: Resolving some Basic Issues. The British Journal of Psychiatry, 1976
- Large sample standard errors of kappa and weighted kappa. Psychological Bulletin, 1969