Ten methodological lessons from the Multi-Country Evaluation of Integrated Management of Childhood Illness
Open Access
- 1 December 2005
- journal article
- research article
- Published by Oxford University Press (OUP) in Health Policy and Planning
- Vol. 20 (suppl_1), i94–i105
- https://doi.org/10.1093/heapol/czi056
Abstract
Objective: To describe key methodological aspects of the Multi-Country Evaluation of the Integrated Management of Childhood Illness strategy (MCE-IMCI) and to analyze their implications for other public health impact evaluations.

Design: The MCE-IMCI evaluation designs are based on an impact model that defined expectations in the late 1990s about how IMCI would be implemented at country level and below, and about the outcomes and impact it would have on child health and survival. MCE-IMCI studies include: feasibility assessments documenting IMCI implementation in 12 countries; in-depth studies using compatible designs in five countries; and cross-site analyses addressing the effectiveness of specific subsets of IMCI activities. The MCE-IMCI was designed both to evaluate the impact of IMCI and to ensure that the evaluation's findings were taken up through formal feedback sessions at national, sub-national and local levels.

Results: Issues that arose early in the MCE-IMCI included: (1) defining the scope of the evaluation; (2) selecting study sites and developing research designs; (3) protecting objectivity; and (4) developing an impact model. Issues that arose mid-course included: (5) anticipating and addressing problems with external validity; (6) ensuring an appropriate time frame for the full evaluation cycle; (7) providing feedback on results to policymakers and programme implementers; and (8) modifying site-specific designs in response to early findings about the patterns and pace of programme implementation. Two critical issues could best be addressed only near the close of the evaluation: (9) factors affecting the uptake of evaluation results by policymakers and programme decision makers; and (10) the costs of the evaluation.

Conclusions: Large-scale effectiveness evaluations present challenges that have not been addressed fully in the methodological literature. Although some of these challenges are context-specific, there are important lessons from the MCE that can inform future designs. Most of the issues described here are not addressed explicitly in research reports or evaluation textbooks. Describing and analyzing these experiences is one way to promote improved impact evaluations of new global health strategies.