Towards valid measures of self-directed clinical learning
- 1 November 2003
- journal article
- Published by Wiley in Medical Education
- Vol. 37 (11), 983-991
- https://doi.org/10.1046/j.1365-2923.2003.01677.x
Abstract
Aim To compare the validity of different measures of self‐directed clinical learning. Methods We used a quasi‐experimental study design. The measures were: (1) a 23‐item quantitative instrument measuring satisfaction with the learning process and environment; (2) free text responses to 2 open questions about the quality of students' learning experiences; (3) a quantitative, self‐report measure of real patient learning, and (4) objective structured clinical examination (OSCE) and progress test results. Thirty‐three students attached to a single firm during 1 curriculum year in Phase 2 of a problem‐based medical curriculum formed an experimental group. Thirty‐one students attached to the same firm in the previous year served as historical controls and 33 students attached to other firms within the same module served as contemporary controls. After the historical control period, experimental group students were exposed to a complex curriculum intervention that set out to maximise appropriate real patient learning through increased use of the outpatient setting, briefing and supported, reflective debriefing. Results The quantitative satisfaction instrument was insensitive to the intervention. In contrast, the qualitative measure recorded a significantly increased number of positive statements about the appropriateness of real patient learning. Moreover, the quantitative self‐report measure of real patient learning found high levels of appropriate learning activity. Regarding outpatient learning, the qualitative and quantitative real patient learning instruments were again concordant and changed in the expected direction, whereas the satisfaction measure did not. 
An incidental finding was that, despite all attempts to achieve horizontal integration through simultaneously providing community attachments and opening up the hospital for self‐directed clinical learning, real patient learning was strongly bounded by the specialty interest of the hospital firm to which students were attached. Assessment results did not correlate with real patient learning.

Conclusions: Both free text responses and students' quantitative self‐reports of real patient learning were more valid than a satisfaction instrument. One explanation is that students had no benchmark against which to rate their satisfaction, and curriculum change altered their tacit benchmarks. Perhaps the stronger emphasis on self‐directed learning demanded more of students and dissatisfied those who were less self‐directed. Results of objective, standardised assessments were not sensitive to the level of self‐directed, real patient learning. Despite an integrated curriculum design that set out to override disciplinary boundaries, students' learning remained strongly influenced by the specialty of their hospital firm.