Using Item Test-Retest Stability (ITRS) as a Criterion for Item Selection: an Empirical Study
- 1 December 1977
- research article
- Published by SAGE Publications in Educational and Psychological Measurement
- Vol. 37 (4), 847-852
- https://doi.org/10.1177/001316447703700406
Abstract
This paper proposes an additional criterion for empirical item selection: item test-retest stability (ITRS). Four tests were used in this research: two ability power tests and two objective personality scales. These tests were administered twice to 100 students, with a time interval of eight months between administrations. For each item in each test, a phi correlation between the first and second administrations was calculated and used as an ITRS index. An abbreviated version of each test was then created by selecting the items with the highest ITRS values. The retest stability coefficients of the abbreviated and original tests were assessed with a new sample of another 100 students, and the coefficients of the abbreviated tests were found to be no lower than those of the original, longer tests. The ITRS criterion and the more popular item internal consistency (IIC) criterion were compared in terms of their effects on retest stability and internal consistency, and conclusions were drawn.
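The ITRS procedure described above can be sketched in a few lines. The following is a minimal illustration, not the authors' original computation: it assumes binary (0/1) item responses stored as respondent-by-item matrices for the two administrations, exploits the fact that the phi coefficient for two dichotomous variables equals the Pearson correlation of their 0/1 codings, and keeps the k items with the highest ITRS. The function names and the parameter k are hypothetical.

```python
import numpy as np

def itrs_index(first, second):
    """Phi correlation between two administrations of one binary item.

    first, second: 0/1 response vectors for the same respondents.
    For dichotomous variables, phi equals the Pearson correlation
    of the 0/1 codings, so np.corrcoef suffices.
    """
    return np.corrcoef(first, second)[0, 1]

def select_items(resp_t1, resp_t2, k):
    """Indices of the k items with the highest ITRS.

    resp_t1, resp_t2: (respondents x items) 0/1 matrices from the
    first and second administrations, columns aligned by item.
    """
    scores = np.array([itrs_index(resp_t1[:, j], resp_t2[:, j])
                       for j in range(resp_t1.shape[1])])
    # Sort descending by ITRS and keep the top k item indices.
    return np.argsort(scores)[::-1][:k]
```

An abbreviated test is then simply the subset of columns returned by `select_items`; its retest stability can be checked on an independent sample, as the paper does with its second group of 100 students.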