Variability in doctors' problem-solving as measured by open-ended written patient simulations
- 1 May 1989
- journal article
- Published by Wiley in Medical Education
- Vol. 23 (3), 270-275
- https://doi.org/10.1111/j.1365-2923.1989.tb01544.x
Abstract
The uncertain validity of written simulations may be due to the difficulty of setting criteria for optimal performance. Criteria are usually set by having a panel of experts define a limited number of 'correct answers' through open discussion. This is an artificial situation: it entails mutual influence and presses the participants to reach a consensus. In the present report we describe an attempt to set 'correct answers' from the independent performance of 15 board-certified internists on four written simulations. There was marked variability in responses, attributable to legitimate differences in approach, to obvious errors in interpreting the data provided, and to possible differences between expert behaviour in a real-life and in a simulated setting. We believe that the criteria for acceptable performance on written clinical simulations should be determined by independent experts rather than by group consensus. Students who receive, after the examination, a compiled list of the options selected by experts in response to the same questions may gain a more realistic insight into the complexity of clinical problem-solving.