The Effects of Violating Standard Item Writing Principles on Tests and Students: The Consequences of Using Flawed Test Items on Achievement Examinations in Medical Education
- 1 June 2005
- journal article
- Published by Springer Nature in Advances in Health Sciences Education
- Vol. 10 (2), 133–143
- https://doi.org/10.1007/s10459-004-4019-5
Abstract
The purpose of this research was to study the effects of violations of standard multiple-choice item writing principles on test characteristics, student scores, and pass–fail outcomes. Four basic science examinations, administered to year-one and year-two medical students, were randomly selected for study. Test items were classified as either standard or flawed by three independent raters, blinded to all item performance data. Flawed test questions violated one or more standard principles of effective item writing. Thirty-six to sixty-five percent of the items on the four tests were flawed. Flawed items were 0–15 percentage points more difficult than standard items measuring the same construct. Over all four examinations, 646 students (53%) passed the standard items while 575 (47%) passed the flawed items. The median passing-rate difference between flawed and standard items was 3.5 percentage points, but ranged from −1 to 35 percentage points. Item flaws had little effect on test score reliability or other psychometric quality indices. Results showed that flawed multiple-choice test items, which violate well-established and evidence-based principles of effective item writing, disadvantage some medical students. Item flaws introduce the systematic error of construct-irrelevant variance to assessments, thereby reducing the validity evidence for examinations and penalizing some examinees.
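For readers who want to run this kind of comparison on their own examinations, the core computations reported in the abstract (item difficulty as proportion correct, pass rates on the standard versus flawed item subsets, and internal-consistency reliability) are straightforward. The sketch below is a minimal Python illustration on simulated data; the 0.65 passing cutoff, the array shapes, and the function names are placeholder assumptions, not the study's actual procedure or passing standards.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha (equals KR-20 for dichotomous 0/1 items)."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def compare_subsets(responses, flawed_mask, pass_cutoff=0.65):
    """Split a students-x-items 0/1 response matrix into standard and flawed
    item subsets and report mean item difficulty (proportion correct),
    pass rate at a fixed cutoff, and internal-consistency reliability."""
    summary = {}
    for label, mask in (("standard", ~flawed_mask), ("flawed", flawed_mask)):
        subset = responses[:, mask]
        item_difficulty = subset.mean(axis=0)   # p-value (proportion correct) per item
        student_scores = subset.mean(axis=1)    # proportion correct per student
        summary[label] = {
            "mean_item_difficulty": float(item_difficulty.mean()),
            "pass_rate": float((student_scores >= pass_cutoff).mean()),
            "reliability_alpha": float(cronbach_alpha(subset)),
        }
    return summary

# Illustrative use with simulated data (not the study's data):
rng = np.random.default_rng(0)
responses = (rng.random((320, 60)) < 0.75).astype(int)  # 320 students, 60 items
flawed_mask = rng.random(60) < 0.5                       # ~half the items flagged as flawed
print(compare_subsets(responses, flawed_mask))
```

In the study itself, items were classified by blinded independent raters and each examination had its own passing standard; the fixed cutoff above merely stands in for that step.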