ITEM ANALYSIS FOR TEACHER-MADE MASTERY TESTS
- 1 December 1974
- journal article
- Published by Wiley in Journal of Educational Measurement
- Vol. 11 (4), 255-261
- https://doi.org/10.1111/j.1745-3984.1974.tb00997.x
Abstract
Various item selection techniques are compared on resultant criterion-referenced reliability and validity. The techniques compared include three nominal criterion-referenced methods, traditional point-biserial selection, teacher selection, and random selection. Eighteen volunteer junior and senior high school teachers supplied behavioral objectives and item pools ranging from 26 to 40 items. Each teacher obtained responses from four classes. Pairs of tests of various lengths were developed by each item selection method. Estimates of test reliability and validity were obtained using responses independent of the test construction sample. Resultant reliability and validity estimates were compared across item selection techniques. Two of the criterion-referenced item selection methods resulted in consistently higher observed validity. However, the small magnitude of improvement over teacher or random selection raises a question as to whether the benefit warrants the extra effort required of the classroom teacher.
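The traditional point-biserial selection mentioned in the abstract ranks items by the correlation between each 0/1 item score and the examinee's total test score, then keeps the most discriminating items. The sketch below illustrates that general technique only; it is not the authors' exact procedure, and the function names and the top-k selection rule are illustrative assumptions.

```python
# Hedged sketch of point-biserial item selection. Assumes the standard
# point-biserial formula r = (M1 - M0) / s * sqrt(p * q), where M1 and M0
# are mean total scores of correct and incorrect responders, s is the SD
# of total scores, and p is the proportion answering correctly.
import statistics

def point_biserial(item_scores, total_scores):
    """Correlation between a dichotomous (0/1) item and the total score."""
    n = len(item_scores)
    p = sum(item_scores) / n            # proportion answering correctly
    q = 1 - p
    if p == 0 or p == 1:
        return 0.0                      # item has no variance; cannot discriminate
    mean_1 = statistics.mean(t for i, t in zip(item_scores, total_scores) if i == 1)
    mean_0 = statistics.mean(t for i, t in zip(item_scores, total_scores) if i == 0)
    s = statistics.pstdev(total_scores)  # population SD of total scores
    return (mean_1 - mean_0) / s * (p * q) ** 0.5

def select_items(responses, k):
    """Pick the k items with the highest point-biserial discrimination.

    `responses` is a list of examinees, each a list of 0/1 item scores.
    Returns the indices of the selected items, best first.
    """
    totals = [sum(person) for person in responses]
    n_items = len(responses[0])
    r = [point_biserial([person[j] for person in responses], totals)
         for j in range(n_items)]
    return sorted(range(n_items), key=lambda j: r[j], reverse=True)[:k]
```

The criterion-referenced methods the article compares against this differ in spirit: rather than correlating items with total score, they typically weight pre- to post-instruction gains, which this sketch does not attempt to model.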