A Comparison of Physician Examinersʼ, Standardized Patientsʼ, and Communication Expertsʼ Ratings of International Medical Graduatesʼ English Proficiency
- 1 December 2000
- research article
- Published by Wolters Kluwer Health in Academic Medicine
- Vol. 75 (12), 1206-1211
- https://doi.org/10.1097/00001888-200012000-00018
Abstract
To assess the quality of ratings of interviewing skills and oral English proficiency provided on a clinical skills OSCE by physician examiners, standardized patients (SPs), and communication skills experts.

In 1998, 73 candidates to the Ontario International Medical Graduate (OIMG) Program completed a 29-station, OSCE-type clinical skills selection examination. Physician examiners, SPs, and communication skills experts assessed components of oral English proficiency and interview performance. From these data were calculated: the frequency and generalizability of English-language flags (physician examiners' indications that spoken English skills were poor enough to significantly impede communication with patients); the reliability of the OIMG Interview and Oral Performance Scales and the generalizability of overall interview and oral performance ratings; and comparisons of repeated assessments by the experts. Principal-components analysis (PCA) was applied to the panels' ratings to determine a more economical expression of the language proficiency and interview communication skills results.

The mean number of English-language flags per candidate was 2.1, the median was 1.0, and Cronbach's alpha of the ratings was 0.63. Means, SDs, and alphas of the physician examiners' and SPs' ratings on the Interview Performance Scale were 9.15/10, 0.43, 0.36 and 9.30/10, 0.56, 0.50, respectively. Corresponding values for overall interview performance ratings were 3.08/4, 0.30, 0.33 and 3.34/4, 0.32, 0.47. Means, SDs, and alphas of the physician examiners' and SPs' ratings on the Oral Performance Scale were 8.54/10, 0.74, 0.78 and 8.74/10, 1.00, 0.76. Corresponding values for overall ratings of oral performance were 3.85/5, 0.51, 0.68 and 4.08/5, 0.60, 0.68. For the two experts' ratings of two contiguous five-minute interview stations, internal consistencies were 0.88 and 0.78.
For the two experts' ratings of standardized ten-minute interviews, internal consistencies were 0.81 and 0.92. Correlations between the mean values of the experts' ratings of the ten- and five-minute stations were 0.45 and 0.51. Three factors emerged from the PCA: language proficiency, physician examiners' ratings of interview proficiency, and SPs' ratings of interview proficiency.

Consistency between the physician examiners' and SPs' ratings of English proficiency was observed; less agreement was observed in their ratings of interviewing skills, and little agreement was observed between the experts' ratings. Communication skills results may be validly expressed by three measures: one overall global rating of language proficiency provided by physician examiners or SPs, and overall global ratings of interview proficiency provided separately by physician examiners and SPs.
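The reliability figures above are Cronbach's alpha values computed over raters' station-level scores. As a minimal sketch of how this statistic is obtained, the function below computes alpha from a candidates-by-stations matrix; the rating data shown are hypothetical and are not taken from the study.

```python
# Cronbach's alpha for a ratings matrix: rows = candidates, columns = stations.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
def cronbach_alpha(ratings):
    """ratings: list of rows (candidates), each a list of k item scores."""
    k = len(ratings[0])  # number of items (stations)

    def var(xs):
        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # variance of each item (column) across candidates
    item_vars = [var([row[j] for row in ratings]) for j in range(k)]
    # variance of each candidate's total score
    total_var = var([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical example: 4 candidates rated on 3 stations (scores out of 10)
data = [
    [8, 9, 8],
    [9, 9, 9],
    [6, 7, 6],
    [7, 8, 8],
]
print(round(cronbach_alpha(data), 3))  # prints 0.964
```

High alpha indicates that stations rank candidates consistently; the study's values (e.g., 0.63 for the English-language flags) suggest only moderate internal consistency for that measure.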