Capitalization on Item Calibration Error in Adaptive Testing
- 1 January 2000
- Journal article
- Published by Taylor & Francis in Applied Measurement in Education
- Vol. 13 (1), 35-53
- https://doi.org/10.1207/s15324818ame1301_2
Abstract
In adaptive testing, item selection is sequentially optimized during the test. Because the optimization takes place over a pool of items calibrated with estimation error, capitalization on chance is likely to occur. How serious the consequences of this phenomenon are depends not only on the distribution of the estimation errors in the pool and the conditional ratio of test length to pool size given ability, but also on the structure of the item selection criterion used. A simulation study demonstrated that capitalization on estimation error can have a dramatic impact on ability estimation. Four strategies for minimizing the likelihood of capitalization on error in computerized adaptive testing are discussed.
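To make the mechanism concrete, the sketch below (not the article's simulation design) runs a small 2PL computerized adaptive test in Python: item parameters are perturbed with artificial calibration error, items are selected by maximum Fisher information computed from the perturbed parameters, and responses are generated from the true parameters. The IRT model, pool size, test length, error standard deviations, and EAP scoring are all illustrative assumptions; the gap in ability-estimation RMSE between selection on perturbed versus true parameters is what the abstract calls capitalization on chance.

```python
# Minimal sketch of capitalization on item calibration error in a 2PL CAT.
# All design choices (2PL model, maximum-information selection, EAP scoring,
# pool size, test length, error SDs) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def eap(responses, a, b, grid=np.linspace(-4, 4, 121)):
    """EAP ability estimate under a standard normal prior."""
    prior = np.exp(-0.5 * grid**2)
    like = np.ones_like(grid)
    for u, ai, bi in zip(responses, a, b):
        p = p_correct(grid, ai, bi)
        like *= p if u else (1.0 - p)
    post = prior * like
    return np.sum(grid * post) / np.sum(post)

def run_cat(theta_true, a_true, b_true, a_sel, b_sel, test_len=20):
    """Administer a CAT: select and score on (a_sel, b_sel), respond on true params."""
    admin, resp = [], []
    theta_hat = 0.0
    available = np.ones(len(a_true), dtype=bool)
    for _ in range(test_len):
        info = fisher_info(theta_hat, a_sel, b_sel)
        info[~available] = -np.inf
        j = int(np.argmax(info))        # favors items whose discrimination is overestimated
        available[j] = False
        admin.append(j)
        resp.append(rng.random() < p_correct(theta_true, a_true[j], b_true[j]))
        theta_hat = eap(resp, a_sel[admin], b_sel[admin])
    return theta_hat

# Illustrative pool of 300 items with arbitrary calibration-error SDs.
n_items, n_examinees = 300, 200
a_true = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)
b_true = rng.normal(0.0, 1.0, size=n_items)
a_cal = np.clip(a_true + rng.normal(0.0, 0.25, size=n_items), 0.2, None)
b_cal = b_true + rng.normal(0.0, 0.30, size=n_items)

thetas = rng.normal(0.0, 1.0, size=n_examinees)
err_cal = [run_cat(t, a_true, b_true, a_cal, b_cal) - t for t in thetas]
err_true = [run_cat(t, a_true, b_true, a_true, b_true) - t for t in thetas]
print("RMSE, selection on calibrated parameters:", np.sqrt(np.mean(np.square(err_cal))))
print("RMSE, selection on true parameters      :", np.sqrt(np.mean(np.square(err_true))))
```

Under these assumed settings, the calibrated-parameter condition typically shows the larger RMSE, because maximum-information selection systematically prefers items whose estimation errors make them look more informative than they are.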