Abstract
A good systematic review is often likened to the pre-flight instrument check, ensuring a plane is airworthy before take-off. By analogy, research synthesis follows a disciplined, formalized, transparent and highly routinized sequence of steps so that its findings can be considered trustworthy before being launched on the policy community. The most characteristic aspect of that schedule is the appraise-then-analyse sequence. The research quality of the primary studies is assessed, and only those deemed to be of a high standard may enter the analysis; the remainder are discarded. This paper rejects this logic, arguing that the 'study' is not the appropriate unit of analysis for quality appraisal in research synthesis. There are often nuggets of wisdom in methodologically weak studies, and systematic review disregards them at its peril. Two evaluations of youth mentoring programmes are appraised at length. A catalogue of doubts is raised about their design and analysis. Their conclusions, which incidentally run counter to each other, are highly questionable. Yet there is a great deal to be learned about the efficacy of mentoring if one digs into the specifics of each study. 'Bad' research may yield 'good' evidence, but only if the reviewer appraises the evidence in the course of analysis rather than before it.