IRT Ability Estimates from Customized Achievement Tests Without Representative Content Sampling
- 1 January 1989
- journal article
- Published by Taylor & Francis in Applied Measurement in Education
- Vol. 2 (1), 15-35
- https://doi.org/10.1207/s15324818ame0201_2
Abstract
This study examines the effects of using item response theory (IRT) ability estimates based on customized tests that were formed by selecting specific content areas from a nationally standardized achievement test. Subsets of items were selected from four different subtests of the Iowa Tests of Basic Skills (Hieronymus, Hoover, & Lindquist, 1985) on the basis of (a) selected content areas (content-customized tests) and (b) a representative sampling of content areas (representative-customized tests). For three of the four tests examined, ability estimates and estimated national percentile ranks based on the content-customized tests in school samples tended to be systematically higher than those based on the full tests. The results of the study suggested that for certain populations, IRT ability estimates and corresponding normative scores on content-customized versions of standardized achievement tests cannot be expected to be equivalent to scores based on the full-length tests.
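The kind of comparison the study describes can be sketched in code. The snippet below is a minimal illustration, not the authors' method: it assumes a two-parameter logistic (2PL) IRT model with made-up item parameters, and contrasts a maximum-likelihood ability estimate computed from a full item set with one computed from a content-selected subset of the same items. All item parameters, responses, and function names here are hypothetical.

```python
import math

# Hypothetical 2PL item parameters (a = discrimination, b = difficulty);
# illustrative values only, not drawn from any real item bank.
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.4), (1.0, 1.1)]
responses = [1, 1, 0, 1]  # 1 = correct, 0 = incorrect

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, items, responses):
    """Log-likelihood of a response pattern under the 2PL model."""
    ll = 0.0
    for (a, b), u in zip(items, responses):
        p = p_correct(theta, a, b)
        ll += u * math.log(p) + (1 - u) * math.log(1 - p)
    return ll

def ml_theta(items, responses, lo=-4.0, hi=4.0, steps=2000):
    """Grid-search maximum-likelihood ability estimate."""
    best = max(range(steps + 1),
               key=lambda i: log_likelihood(lo + (hi - lo) * i / steps,
                                            items, responses))
    return lo + (hi - lo) * best / steps

# Ability estimated from all items vs. from a content-selected subset.
theta_full = ml_theta(items, responses)
theta_subset = ml_theta(items[:3], responses[:3])
```

When item parameters are calibrated on the full test but scoring uses only a non-representative content subset, the two estimates can diverge systematically, which is the pattern the study reports for three of the four subtests.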