Abstract
Research reports of interventions for persons with developmental disabilities, published in many professional journals, have traditionally been expected to fulfill two quite different functions. First, this research literature serves as the scientific database supporting the validity of recommended most promising practices. Second, these same reports are expected to provide information that guides practitioners' efforts to implement those practices. In a parallel fashion, the experimental method has been used both to test intervention hypotheses in research studies and as a model for practitioners evaluating applications in typical settings. This paper explores the extent to which it is reasonable or practical to expect conventional experimental methodologies and research reports to serve this dual purpose. Recommendations are made for research and practice that require multiple perspectives and approaches better suited to a human science.