Evaluating interventions with differential attrition: The importance of nonresponse mechanisms and use of follow-up data.

Abstract
Evaluations of psychological interventions are often criticized for differential attrition, which is cited as a severe threat to validity. The present study shows that differential attrition is not a problem unless the mechanism causing the attrition is inaccessible (unavailable for analysis). With a simulation study, we show that conclusions about program effects (a) are unbiased when there is no differential attrition, even with the usual complete-case analysis; (b) may be severely biased when based on the usual complete-case analysis and there is differential attrition; (c) are unbiased when based on the expectation-maximization (EM) algorithm, even when there is differential attrition, as long as the attrition mechanism is accessible; and (d) are biased, even with the EM algorithm, when the attrition mechanism is inaccessible. Following Little and Rubin (1987), we advocate collecting new data from a random sample of subjects with initially missing data. On the basis of these follow-up data, we propose a simple correction to the EM algorithm estimates. In our study, the correction produced unbiased estimates of program-effect parameters even with an inaccessible attrition mechanism and substantial differential attrition.
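
The following is a minimal simulation sketch of the logic summarized above, not the authors' actual design: it contrasts a complete-case estimate of a program effect under an accessible versus an inaccessible attrition mechanism with an estimate corrected by follow-up data from a random subsample of nonrespondents. All names (simulate_trial, impose_attrition), the effect size of 0.5, the attrition models, and the follow-up fraction are illustrative assumptions; the EM step described in the abstract is omitted here.

import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n=2000, effect=0.5):
    """Simulate a randomized trial: pretest x, assignment z, posttest y.

    The true program effect (0.5) is an assumed value for illustration only.
    """
    x = rng.normal(size=n)                          # pretest, always observed
    z = rng.integers(0, 2, size=n)                  # random assignment to program (1) or control (0)
    y = 0.6 * x + effect * z + rng.normal(size=n)   # posttest, subject to attrition
    return x, z, y

def impose_attrition(z, x, y, mechanism):
    """Return a response indicator r (True = posttest observed).

    "accessible": dropout depends only on observed data (pretest and group).
    "inaccessible": dropout depends on the unobserved posttest itself.
    Both mechanisms are differential (response rates differ by group).
    """
    if mechanism == "accessible":
        logit = -0.5 + 1.0 * x - 1.0 * z
    else:
        logit = -0.5 + 1.0 * y - 1.0 * z
    p_respond = 1.0 / (1.0 + np.exp(-logit))
    return rng.random(len(y)) < p_respond

def complete_case_effect(z, y, r):
    """Difference in posttest means among responders only."""
    return y[(z == 1) & r].mean() - y[(z == 0) & r].mean()

def follow_up_corrected_effect(z, y, r, follow_up_frac=0.2):
    """Estimate each group mean from responders plus a random follow-up
    subsample of initial nonrespondents, weighting the two strata by size."""
    means = []
    for g in (0, 1):
        resp = (z == g) & r
        nonresp = (z == g) & ~r
        idx = np.flatnonzero(nonresp)
        follow = rng.choice(idx, size=max(1, int(follow_up_frac * len(idx))), replace=False)
        n_resp, n_nonresp = resp.sum(), nonresp.sum()
        mean_g = (n_resp * y[resp].mean() + n_nonresp * y[follow].mean()) / (n_resp + n_nonresp)
        means.append(mean_g)
    return means[1] - means[0]

x, z, y = simulate_trial()
for mech in ("accessible", "inaccessible"):
    r = impose_attrition(z, x, y, mech)
    print(mech,
          "| complete-case:", round(complete_case_effect(z, y, r), 3),
          "| follow-up corrected:", round(follow_up_corrected_effect(z, y, r), 3))

Under either mechanism the complete-case estimate drifts away from the true effect of 0.5 because attrition is differential, while the follow-up-corrected estimate stays close to it, since the follow-up subsample is a random draw from the nonrespondents and therefore recovers their mean without bias.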