The process of rater training for observational instruments: Implications for interrater reliability
- 1 October 1990
- research article
- Published by Wiley in Research in Nursing & Health
- Vol. 13 (5), 311-318
- https://doi.org/10.1002/nur.4770130507
Abstract
Although the process of rater training is important for establishing the interrater reliability of observational instruments, there is little information available in the current literature to guide the researcher. In this article, principles and procedures that can be used when rater performance is a critical element of reliability assessment are described. Three phases of the process of rater training are presented: (a) training raters to use the instrument; (b) evaluating rater performance at the end of training; and (c) determining the extent to which rater training is maintained during a reliability study. An example is presented to illustrate how these phases were incorporated in a study to examine the reliability of a measure of patient intensity called the Patient Intensity for Nursing Index (PINI).
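As a concrete illustration of phase (b), the minimal sketch below (not from the article; the raters, categories, and ratings are hypothetical) computes two common agreement statistics for a trainee rater scored against an expert standard on nominal codes: simple percent agreement and Cohen's kappa, which corrects agreement for chance.

```python
# Hedged sketch: evaluating rater performance at the end of training by
# comparing a trainee's nominal codes against an expert standard.
# Category labels and ratings are hypothetical, for illustration only.
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which the two raters assign the same code."""
    assert len(r1) == len(r2)
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement for two raters on nominal categories."""
    n = len(r1)
    po = percent_agreement(r1, r2)                    # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(c1) | set(c2)
    pe = sum((c1[c] / n) * (c2[c] / n) for c in cats) # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical post-training check: trainee vs. expert on 10 observed items.
expert  = ["high", "low", "low", "high", "med", "low", "high", "med", "low", "high"]
trainee = ["high", "low", "med", "high", "med", "low", "high", "med", "low", "low"]
print(f"agreement = {percent_agreement(expert, trainee):.2f}, "
      f"kappa = {cohens_kappa(expert, trainee):.2f}")
```

Percent agreement alone can look high when one category dominates, so a chance-corrected statistic such as kappa is often reported alongside it when deciding whether raters have met a training criterion.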