The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study
- Open Access
- 18 October 2011
- Research article
- Published by Public Library of Science (PLoS) in PLOS ONE
- Vol. 6 (10) , e25491
- https://doi.org/10.1371/journal.pone.0025491
Abstract
Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of such overestimation, theoretical considerations suggest that random error may be an equally or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated.

We simulated a comprehensive array of meta-analysis scenarios in which no intervention effect existed (relative risk reduction (RRR) = 0%) or in which a small and possibly unimportant effect existed (RRR = 10%). We constructed the scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR > 20% and RRR > 30% at each increment of 500 patients and 50 events, and we calculated the cumulative number of patients and events required to reduce the probability of overestimation to 10%, 5%, and 1%. We also calculated the optimal information size for each simulated scenario and explored whether meta-analyses that surpassed their optimal information size were sufficiently protected against overestimation due to random error.

The risk of overestimation of intervention effects was typically high when the number of patients and events was small, and this risk decreased exponentially as the number of patients and events accumulated. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation.
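The simulation design described in the abstract can be sketched in outline. This is a minimal illustration, not the authors' actual code: the fixed-effect inverse-variance pooling of log relative risks, the trial sizes, the continuity correction, and the replication counts are all assumptions made for the sketch.

```python
import numpy as np

def simulate_overestimation_prob(n_meta=1000, n_trials=10, n_per_arm=50,
                                 control_risk=0.1, true_rrr=0.0,
                                 threshold_rrr=0.2, seed=0):
    """Estimate the probability that a fixed-effect meta-analysis of
    binomial trials overestimates the relative risk reduction (RRR)
    beyond `threshold_rrr` through random error alone.

    Illustrative sketch only: pooling uses inverse-variance weights on
    the log relative risk, with a 0.5 continuity correction for trials
    containing a zero (or full) cell.
    """
    rng = np.random.default_rng(seed)
    p_c = control_risk                      # control group event risk
    p_t = control_risk * (1.0 - true_rrr)   # intervention group risk
    overestimates = 0
    for _ in range(n_meta):
        log_rr, weights = [], []
        for _ in range(n_trials):
            ec = rng.binomial(n_per_arm, p_c)   # control events
            et = rng.binomial(n_per_arm, p_t)   # intervention events
            nc = nt = float(n_per_arm)
            if ec in (0, n_per_arm) or et in (0, n_per_arm):
                # continuity correction for degenerate 2x2 cells
                ec, et = ec + 0.5, et + 0.5
                nc = nt = n_per_arm + 1.0
            rr = (et / nt) / (ec / nc)
            var = 1.0 / et - 1.0 / nt + 1.0 / ec - 1.0 / nc
            log_rr.append(np.log(rr))
            weights.append(1.0 / var)
        pooled_log_rr = np.average(log_rr, weights=weights)
        if 1.0 - np.exp(pooled_log_rr) > threshold_rrr:
            overestimates += 1
    return overestimates / n_meta
```

Under a true null effect (RRR = 0%), a meta-analysis of only a couple of small trials yields an apparent RRR above 20% in a substantial fraction of replications, while the same calculation over many trials (i.e., far more patients and events) does so only rarely, mirroring the exponential decay reported in the results.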
Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size provides sufficient protection against such overestimation.
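The optimal information size invoked in the conclusion is conventionally obtained from a standard two-sample proportion sample-size formula. A sketch under that convention follows; the significance level, power, and pooled-variance approximation are illustrative choices, not values taken from the paper.

```python
from statistics import NormalDist

def optimal_information_size(control_risk, rrr, alpha=0.05, power=0.9):
    """Total required meta-analysis sample size (both arms combined)
    using the standard two-sample proportion formula commonly applied
    for the optimal information size. Illustrative sketch only."""
    p_c = control_risk
    p_t = control_risk * (1.0 - rrr)   # anticipated intervention risk
    delta = p_c - p_t                  # absolute risk difference
    p_bar = (p_c + p_t) / 2.0          # pooled-variance approximation
    z = NormalDist()
    z_alpha = z.inv_cdf(1.0 - alpha / 2.0)
    z_beta = z.inv_cdf(power)
    n_per_arm = (2.0 * (z_alpha + z_beta) ** 2
                 * p_bar * (1.0 - p_bar) / delta ** 2)
    return 2.0 * n_per_arm
```

Because the required size scales with the inverse square of the anticipated risk difference, modest effects at low control group risks demand meta-analyses of several thousand patients, which is consistent with the finding that small meta-analyses are insufficiently protected against overestimation.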