Characterizing Measurement Error in Scores Across Studies: Some Recommendations for Conducting “Reliability Generalization” Studies
- 1 July 2002
- journal article
- research article
- Published by Taylor & Francis in Measurement and Evaluation in Counseling and Development
- Vol. 35 (2), 113-127
- https://doi.org/10.1080/07481756.2002.12069054
Abstract
T. Vacha-Haase (1998) proposed her “reliability generalization” methodology to characterize (a) typical score reliability for a measure across studies, (b) the variability of score reliabilities, and (c) what measurement protocol features predict the variability in score reliabilities across administrations. The present article provides recommendations on how to conduct these studies.
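As an informal illustration of the three components named in the abstract, the following Python sketch computes a typical reliability, its variability, and a regression of reliabilities on protocol features. It is a minimal sketch only; the data values, feature names, and analytic choices (unweighted vs. n-weighted means, ordinary least squares) are assumptions for illustration and are not taken from the article.

```python
# Illustrative reliability generalization (RG) sketch with hypothetical data.
import numpy as np

# Hypothetical score reliabilities (e.g., coefficient alpha) reported across studies,
# with two example measurement-protocol features: sample size and number of items.
alphas      = np.array([0.78, 0.85, 0.72, 0.91, 0.80, 0.76])
sample_n    = np.array([120,  340,   85,  560,  210,  150])
test_length = np.array([10,   20,   10,   30,   20,   15])

# (a) Typical score reliability across studies (unweighted and n-weighted means).
mean_alpha     = alphas.mean()
weighted_alpha = np.average(alphas, weights=sample_n)

# (b) Variability of score reliabilities across administrations.
sd_alpha = alphas.std(ddof=1)

# (c) Which protocol features predict that variability: least-squares regression
# of reliabilities on study features (intercept, sample size, test length).
X = np.column_stack([np.ones_like(alphas), sample_n, test_length])
coefs, *_ = np.linalg.lstsq(X, alphas, rcond=None)

print(f"mean alpha = {mean_alpha:.3f}, n-weighted mean = {weighted_alpha:.3f}")
print(f"SD of alphas = {sd_alpha:.3f}")
print(f"regression coefficients (intercept, n, items): {coefs}")
```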
This publication has 32 references indexed in Scilit:
- Confidence Intervals for Effect Sizes. Educational and Psychological Measurement, 2001
- Use of Structure Coefficients in Published Multiple Regression Articles: β Is Not Enough. Educational and Psychological Measurement, 2001
- Reliability Generalization: Exploring Variation of Reliability Coefficients of MMPI Clinical Scales Scores. Educational and Psychological Measurement, 2001
- Measurement Error in “Big Five Factors” Personality Assessment: Reliability Generalization Across Studies and Measures. Educational and Psychological Measurement, 2000
- Psychometrics Versus Datametrics: Comment on Vacha-Haase’s “Reliability Generalization” Method and Some EPM Editorial Policies. Educational and Psychological Measurement, 2000
- Psychometrics Is Datametrics: The Test Is Not Reliable. Educational and Psychological Measurement, 2000
- How Well Do Researchers Report Their Measures? An Evaluation of Measurement in Published Educational Research. Educational and Psychological Measurement, 1998
- Book Reviews. Educational and Psychological Measurement, 1991
- A Random Effects Model for Effect Sizes. Psychological Bulletin, 1983
- Multiple Regression as a General Data-Analytic System. Psychological Bulletin, 1968