An Alternative Method for Scoring Adaptive Tests
- 1 December 1996
- journal article
- Published by American Educational Research Association (AERA) in Journal of Educational and Behavioral Statistics
- Vol. 21 (4), 365-389
- https://doi.org/10.3102/10769986021004365
Abstract
Modern applications of computerized adaptive testing are typically grounded in item response theory (IRT; Lord, 1980). While the IRT foundations of adaptive testing provide a number of approaches to adaptive test scoring that may seem natural and efficient to psychometricians, these approaches may be more demanding for test takers, test score users, and interested regulatory institutions to comprehend. An alternative method, based on more familiar equated number-correct scores and identical to that used to score and equate many conventional tests, is explored and compared with one that relies more directly on IRT. It is concluded that scoring adaptive tests using the familiar number-correct score, accompanied by the necessary equating to adjust for the intentional differences in adaptive test difficulty, is a statistically viable, although slightly less efficient, method of adaptive test scoring. To enhance the prospects for enlightened public debate about adaptive testing, it may be preferable to use this more familiar approach. Public attention would then likely be focused on issues more central to adaptive testing, namely, the adaptive nature of the test.
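The contrast the abstract draws, between IRT-based ability estimates and equated number-correct scores, can be illustrated with a small sketch. The code below is not from the article: it assumes a 2PL item response model, fabricates item parameters for an adaptive form and a conventional reference form, and applies true-score (test characteristic curve) equating in the spirit of Lord (1980) to map an adaptive-form number-correct score onto the reference-form scale. The equating procedure actually studied in the article may differ in its details.

```python
# Hypothetical illustration (not the article's code): equated number-correct
# scoring of an adaptive test via IRT true-score equating under a 2PL model.
# All item parameters below are fabricated for the example.
import numpy as np

rng = np.random.default_rng(0)

# Made-up item parameters: an "adaptive" form targeted at higher difficulty
# and a conventional reference form spanning a wider difficulty range.
a_adapt, b_adapt = rng.uniform(0.8, 1.6, 20), rng.normal(0.5, 0.6, 20)
a_ref,   b_ref   = rng.uniform(0.8, 1.6, 20), rng.normal(0.0, 1.0, 20)

def tcc(theta, a, b):
    """Test characteristic curve: expected number-correct score at each theta."""
    theta = np.atleast_1d(theta)
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
    return p.sum(axis=0)

def equate_number_correct(x_adapt):
    """Map a number-correct score on the adaptive form to the reference-form
    scale: numerically invert the adaptive-form TCC to find the implied
    ability, then evaluate the reference-form TCC at that ability."""
    grid = np.linspace(-4, 4, 2001)
    tau_adapt = tcc(grid, a_adapt, b_adapt)
    theta_hat = grid[np.argmin(np.abs(tau_adapt - x_adapt))]  # crude inverse
    return tcc(theta_hat, a_ref, b_ref)[0]

print(equate_number_correct(14))  # equated reference-form true score
```

In an operational program the TCC inverse would be computed more carefully and number-correct scores outside the TCC's range handled separately; the sketch only shows the shape of the computation that distinguishes equated number-correct scoring from reporting an IRT ability estimate directly.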
This publication has 17 references indexed in Scilit:
- Simulation Results of Effects on Linear and Curvilinear Observed- and True-Score Equating Procedures of Matching on a Fallible Criterion. Applied Measurement in Education, 1990
- Effect on Equating Results of Matching Samples on an Anchor Test. Applied Measurement in Education, 1990
- Problems Related to the Use of Conventional and Item Response Theory Equating Methods in Less Than Optimal Circumstances. Applied Psychological Measurement, 1987
- Bayes Modal Estimation in Item Response Models. Psychometrika, 1986
- Technical Guidelines for Assessing Computerized Adaptive Tests. Journal of Educational Measurement, 1984
- Estimating Latent Distributions. Psychometrika, 1984
- Unbiased Estimators of Ability Parameters, of Their Variance, and of Their Parallel-Forms Reliability. Psychometrika, 1983
- Reliability and Validity of Adaptive Ability Tests in a Military Setting. Published by Elsevier, 1983
- A Study of Pre-Equating Based on Item Response Theory. Applied Psychological Measurement, 1982
- A Broad-Range Tailored Test of Verbal Ability. Applied Psychological Measurement, 1977