An Examination of Strategies for Gaining Convergent Validity in Natural Experiments

Abstract
Widely disseminated programs often fail to include experimental or quasi-experimental evaluation designs to test their effectiveness. This article presents several data-analytic strategies that may help evaluators assess program effectiveness under the conditions of a natural experiment. The suggested methodology is to perform multiple independent analyses and examine the degree of convergence or divergence in estimated outcomes of the D.A.R.E. (Drug Abuse Resistance Education) prevention program. When used in combination, multiple analyses strengthen the estimate of program effect by minimizing potentially spurious results that may emerge from any single analytic strategy.
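
To illustrate the convergence logic described above, the following minimal Python sketch applies three independent analytic strategies (a naive difference in means, covariate-adjusted regression, and subclassification on a confounder) to simulated data and compares the resulting effect estimates. All data and variable names here are hypothetical assumptions for illustration, not drawn from the D.A.R.E. evaluation itself.

```python
# A minimal sketch of the multiple-analysis convergence strategy, using
# simulated data in place of actual program outcome measures.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Confounder (e.g., baseline risk) influences both exposure and outcome,
# as in a natural experiment with nonrandom program assignment.
risk = rng.normal(size=n)
exposed = (rng.normal(size=n) + 0.8 * risk > 0).astype(float)
outcome = 0.5 * exposed - 1.0 * risk + rng.normal(size=n)  # true effect = 0.5

# Analysis 1: naive difference in means (ignores confounding).
naive = outcome[exposed == 1].mean() - outcome[exposed == 0].mean()

# Analysis 2: regression adjusting for the confounder (OLS via least squares).
X = np.column_stack([np.ones(n), exposed, risk])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted = beta[1]

# Analysis 3: subclassification -- stratify on the confounder and average
# within-stratum mean differences, weighted by stratum size.
edges = np.quantile(risk, np.linspace(0, 1, 6))
strata = np.digitize(risk, edges[1:-1])
effects, weights = [], []
for s in range(5):
    m = strata == s
    t, c = m & (exposed == 1), m & (exposed == 0)
    if t.any() and c.any():
        effects.append(outcome[t].mean() - outcome[c].mean())
        weights.append(m.sum())
stratified = np.average(effects, weights=weights)

print(f"naive:      {naive:+.3f}")
print(f"adjusted:   {adjusted:+.3f}")
print(f"stratified: {stratified:+.3f}")
```

Convergence of the two confounder-aware estimates, alongside divergence of the naive estimate, is the kind of pattern the multiple-analysis strategy is meant to surface: agreement across independent strategies strengthens confidence in the effect estimate, while disagreement flags results that may be artifacts of a particular analytic choice.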