Why certain systematic reviews reach uncertain conclusions
Open Access
- 5 April 2003
- Vol. 326 (7392) , 756-758
- https://doi.org/10.1136/bmj.326.7392.756
Abstract
Sound systematic reviews may not guide practice.

In public health there are few trials to review and indeed few other types of outcome assessment.3 Unsurprisingly, research users often regard reviews of such a limited evidence base as unhelpful and find their conclusions confusing and frustrating.4 This is ironic, given that systematic reviews are intended (among other things) to reduce uncertainty (box 1). Systematic reviews are certainly capable of doing this, and there are many well known clinical examples.9 Examples from other fields relevant to public health include two reviews that examined the effectiveness of improved street lighting and closed circuit television as deterrents to crime.10 11 These reviews included a total of 35 studies and found that although closed circuit television reduced crime in car parks, it had little effect in city centres or when used on public transport.11 Improved street lighting, however, reduced crime by up to a fifth, and savings outweighed the installation costs.10

Box 1: Systematic reviews and uncertainty

- "Systematic reviews aim to reduce uncertainty by strengthening the evidence base"5
- "Systematic reviews … contribute to resolve uncertainty when original research, reviews, and editorials disagree"6
- "Systematic reviews can be conducted in an effort to resolve conflicting evidence, to answer questions where the answer is uncertain or to explain variations in practice"7
- "Systematic reviews are needed to inform policy and decision-making about the organisation and delivery of health and social care. They are particularly useful when there is uncertainty regarding the potential benefits or harm of an intervention"8

Equally common, however, are reviews that go to extreme lengths to seek out the best evidence, only to conclude that "good evidence is currently lacking." Although this may be an accurate representation of the state of the evidence, it is not useful for guiding practice or policy, and users and funders will not see value in reviews that consistently and predictably conclude that no good evidence exists. Systematic reviews also risk being perceived, quite wrongly, as simply a means of criticising existing research rather than informing decision making. Worse, their positive messages may be overlooked, and they will be seen as the public health version of Cassandra, the classical bearer of bad news who was doomed never to be believed.
References
- Evidence-based public health practice: improving the quality and quantity of the evidence. Journal of Public Health, 2002
- Evidence based policy: proceed with care. Commentary: research must be taken seriously. BMJ, 2001
- Evidence-based medicine in nephrology: identifying and critically appraising the literature. Nephrology Dialysis Transplantation, 2000
- Should journals publish systematic reviews that find no evidence to guide practice? Examples from injury research. BMJ, 2000
- Randomised studies of income supplementation: a lost opportunity to assess health outcomes. Journal of Epidemiology and Community Health, 1999
- Quality-assessed reviews of health care interventions and the database of abstracts of reviews of effectiveness (DARE). NHS CRD Review, Dissemination, and Information Teams, 1999
- Evaluations of road accident blackspot treatment: a case of the iron law of evaluation studies? Accident Analysis & Prevention, 1997
- Systematic reviews: rationale for systematic reviews. BMJ, 1994