Assessing the quality of research
- 1 January 2004
- Vol. 328 (7430), 39-41
- https://doi.org/10.1136/bmj.328.7430.39
Abstract
Inflexible use of evidence hierarchies confuses practitioners and irritates researchers. So how can we improve the way we assess research?

The widespread use of hierarchies of evidence that grade research studies according to their quality has helped to raise awareness that some forms of evidence are more trustworthy than others. This is clearly desirable. However, the simplifications involved in creating and applying hierarchies have also led to misconceptions and abuses. In particular, criteria designed to guide inferences about the main effects of treatment have been uncritically applied to questions about aetiology, diagnosis, prognosis, or adverse effects. So should we assess evidence the way Michelin guides assess hotels and restaurants? We believe five issues should be considered in any revision of, or alternative to, current approaches to helping practitioners find reliable answers to important clinical questions.

Ever since two American social scientists introduced the concept in the early 1960s,1 hierarchies have been used almost exclusively to determine the effects of interventions. This initial focus was appropriate but has also engendered confusion. Although interventions are central to clinical decision making, practice relies on answers to a wide variety of types of clinical questions, not just the effects of interventions.2 Other hierarchies might be necessary to answer questions about aetiology, diagnosis, disease frequency, prognosis, and adverse effects.3 Thus, although a systematic review of randomised trials would be appropriate for answering questions about the main effects of a treatment, it would be ludicrous to attempt to use it to ascertain the relative accuracy of computerised versus human reading of cervical smears, the natural course of prion diseases in humans, the effect of carriership of a mutation on the risk of venous thrombosis, or the rate of vaginal adenocarcinoma in the daughters of pregnant women given diethylstilboestrol.4 To answer their everyday questions, practitioners …
This publication has 18 references indexed in Scilit:
- Randomisation to protect against selection bias in healthcare trials. Published by Wiley, 2007
- Letters, numbers, symbols and words: how to communicate grades of evidence and recommendations. 2003
- Anecdotes as evidence. BMJ, 2003
- Acceptance and Compliance with External Hip Protectors: A Systematic Review of the Literature. Osteoporosis International, 2002
- Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Statistics in Medicine, 2002
- Users' Guides to the Medical Literature. JAMA, 2000
- Oral contraception and health. BMJ, 1999
- Discussion. Journal of Clinical Epidemiology, 1998
- An evidence based approach to individualising treatment. BMJ, 1995
- Validity of anecdotal reports of suspected adverse drug reactions: the problem of false alarms. BMJ, 1982