Sifting Statistical Significance From the Artifact of Regression-Discontinuity Design
- 1 April 1990
- journal article
- research article
- Published by SAGE Publications in Evaluation Review
- Vol. 14 (2), 166-181
- https://doi.org/10.1177/0193841x9001400204
Abstract
When the covariate of an evaluation study using regression analysis is fallibly measured, the statistical test of program effectiveness is biased. In programs where the target population is "below average," the bias tends to suppress the beneficial effects of the program, while inflating them in programs designed for those "above average." The magnitude of this bias is calculated, and a correction method is derived and illustrated. In many cases, these biases will make practical differences in program assessment, lending further support to the general distrust of nonexperimental methods. Nonetheless, our approach improves the reliability of nonexperimental methods by correcting one potential source of their bias.
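
The artifact described above is easy to reproduce in simulation. The sketch below is illustrative only: it assumes a classical measurement-error model, selection of the program group on the true pretest standing, and a known error variance, and the reliability-corrected ANCOVA it applies (shrinking the observed pretest toward its group mean by the within-group reliability) is a standard correction for a fallible covariate, not necessarily the exact procedure derived in the article. All variable names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# True pretest standing T; the program targets people who are truly below average.
T = rng.normal(0.0, 1.0, n)
D = (T < 0.0).astype(float)

# Fallible observed pretest X = T + measurement error (overall reliability ~0.8).
err_var = 0.25
X = T + rng.normal(0.0, np.sqrt(err_var), n)

# Outcome: depends on the true standing plus a genuine beneficial program effect.
true_effect = 0.30
Y = 0.5 * T + true_effect * D + rng.normal(0.0, 1.0, n)

def ancova_effect(cov):
    """Coefficient on the program indicator D from OLS of Y on [1, covariate, D]."""
    Z = np.column_stack([np.ones(n), cov, D])
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return beta[2]

# Naive ANCOVA on the fallible pretest: under-adjusts, suppressing the effect
# because the target group is below average.
naive = ancova_effect(X)

# Reliability-corrected covariate: shrink X toward its group mean by the pooled
# within-group reliability (the error variance is assumed known for this sketch).
group_mean = np.where(D == 1.0, X[D == 1.0].mean(), X[D == 0.0].mean())
within_resid = X - group_mean
rho_w = 1.0 - err_var / within_resid.var()
X_corrected = group_mean + rho_w * within_resid
corrected = ancova_effect(X_corrected)

print(f"true program effect        : {true_effect:+.3f}")
print(f"naive ANCOVA estimate      : {naive:+.3f}")
print(f"reliability-corrected est. : {corrected:+.3f}")
```

In this setup the naive estimate is pulled well below the true effect for the below-average target group, while the corrected estimate recovers it; reversing the selection rule to target an above-average group flips the direction of the bias, consistent with the abstract.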