Clinical work sampling
- Research article, published 1 August 2000 by Springer Nature in Journal of General Internal Medicine
- Vol. 15 (8), 556–561
- https://doi.org/10.1046/j.1525-1497.2000.06099.x
Abstract
OBJECTIVE: Existing systems of in-training evaluation (ITE) have been criticized as unreliable and invalid methods for assessing student performance during clinical education. The purpose of this study was to assess the feasibility, reliability, and validity of a clinical work sampling (CWS) approach to ITE. This approach focused on: (1) basing performance data on observed behaviors, (2) using multiple observers and occasions, (3) recording data at the time of performance, and (4) allowing for a feasible system for receiving feedback.

PARTICIPANTS: Sixty-two third-year University of Ottawa students were assessed during their 8-week internal medicine inpatient experience.

MEASUREMENTS AND MAIN RESULTS: Four performance rating forms (Admission Rating Form, Ward Rating Form, Multidisciplinary Team Rating Form, and Patient’s Rating Form) were introduced to document student performance. Voluntary participation rates were variable (12%–64%), and patients were excluded from the analysis because of their low response rate (12%). The mean number of evaluations per student per rotation (19) exceeded the number needed to achieve sufficient reliability. Reliability coefficients were high for the Ward Form (.86) and the Admission Form (.73) but not for the Multidisciplinary Team Form (.22). There was an examiner effect (rater leniency), but it was small relative to real differences between students. The correlation between the Ward Form and the Admission Form was high (.47), while their correlations with the Multidisciplinary Team Form were lower (.37 and .26, respectively). The CWS approach to ITE was judged content valid by expert judges.

CONCLUSIONS: The collection of ongoing performance data was reasonably feasible, reliable, and valid.
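Reliability coefficients like those reported above are typically derived from variance components, asking how much of the score variance reflects real differences between students rather than rater noise. As a minimal sketch only (the paper's actual analysis and data are not reproduced here, and the ratings below are synthetic), an intraclass correlation for the average of k ratings per student can be computed from a one-way variance decomposition:

```python
# Hedged illustration: estimating the reliability of averaged ratings
# via an intraclass correlation, ICC(1, k). The data are synthetic and
# are NOT taken from the study; this only shows the kind of computation
# behind coefficients such as .86 for a rating form.
import statistics

# rows = students, columns = repeated ratings (e.g., on a 1-5 scale)
ratings = [
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
]

n = len(ratings)        # number of students
k = len(ratings[0])     # ratings per student

grand_mean = statistics.mean(x for row in ratings for x in row)
row_means = [statistics.mean(row) for row in ratings]

# Between-student and within-student mean squares (one-way ANOVA)
ms_between = k * sum((m - grand_mean) ** 2 for m in row_means) / (n - 1)
ms_within = sum(
    (x - m) ** 2 for row, m in zip(ratings, row_means) for x in row
) / (n * (k - 1))

# ICC(1, k): reliability of the mean of k ratings per student
icc_k = (ms_between - ms_within) / ms_between
print(round(icc_k, 2))  # → 0.94 for this synthetic data
```

More ratings per student push the reliability of the averaged score upward, which is why the observed mean of 19 evaluations per rotation could exceed the number needed for sufficient reliability.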