AIMS Ratings – Repeatability
- 1 May 1988
- research article
- Published by Royal College of Psychiatrists in The British Journal of Psychiatry
- Vol. 152 (5), 670–673
- https://doi.org/10.1192/bjp.152.5.670
Abstract
In the present study, two raters, a psychologist and a nurse, each made five independent ratings of 30 different video-recorded patient examinations. Having thus excluded patient fluctuation, individual-rater consistency and between-rater agreement over the 6 weeks of the study are examined. While between-rater agreement was apparently maintained, mean AIMS scores steadily increased. In the hands of these raters, AIMS items 2 and 4 emerged as very reliable, while items 1, 6, and 7 showed high variability. Some patients appeared to be hard to rate. Differences between the study raters and the author JB highlight the issue: how reproducible is an AIMS rating?