Robustness of multiple testing procedures against dependence
Open Access
- 1 February 2009
- Journal article
- Published by the Institute of Mathematical Statistics in The Annals of Statistics
- Vol. 37 (1), 332-358
- https://doi.org/10.1214/07-aos557
Abstract
An important aspect of multiple hypothesis testing is controlling the significance level, or the level of Type I error. When the test statistics are not independent, it can be particularly challenging to address this problem without resorting to very conservative procedures. In this paper we show that, in the context of contemporary multiple testing problems, where the number of tests is often very large, the difficulties caused by dependence are less serious than in classical cases. This is particularly true when the null distributions of the test statistics are relatively light-tailed, for example, when they can be based on Normal or Student’s t approximations. There, if the test statistics can fairly be viewed as being generated by a linear process, an analysis founded on the incorrect assumption of independence is asymptotically correct as the number of hypotheses diverges. In particular, the point process representing the null distribution of the indices at which statistically significant test results occur is approximately Poisson, just as in the case of independence. The Poisson process also has the same mean as in the independence case, and of course exhibits no clustering of false discoveries. However, this result can fail if the null distributions are particularly heavy-tailed. In that case, clusters of statistically significant results can occur even when the null hypothesis is correct. We give an intuitive explanation for these disparate properties in the light- and heavy-tailed cases, and provide rigorous theory underpinning the intuition.
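The contrast described in the abstract can be illustrated with a small simulation. The sketch below is not code from the paper; it simply generates a moving-average (linear) sequence of null test statistics with light-tailed (Gaussian) versus heavy-tailed (Student t with 2 degrees of freedom) innovations, flags exceedances of a two-sided Normal critical value, and reports how often consecutive exceedances fall within the moving-average window. The window length, significance level, and sample size are arbitrary choices made for the illustration.

```python
# Illustrative sketch (assumed parameters, not the paper's method): compare
# clustering of threshold exceedances for a linear process of null test
# statistics under light-tailed vs heavy-tailed innovations.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, window = 100_000, 5                          # number of tests, MA window length
weights = np.ones(window) / np.sqrt(window)     # moving-average weights

def exceedances(innovations, level=1e-4):
    """Indices where |X_i| exceeds the two-sided Normal critical value."""
    x = np.convolve(innovations, weights, mode="valid")[:n]
    thresh = norm.isf(level / 2)                # Normal approximation to the null
    return np.flatnonzero(np.abs(x) > thresh)

for name, innov in [("Gaussian", rng.standard_normal(n + window - 1)),
                    ("Student t(2)", rng.standard_t(df=2, size=n + window - 1))]:
    idx = exceedances(innov)
    gaps = np.diff(idx)
    frac_clustered = float(np.mean(gaps < window)) if gaps.size else 0.0
    print(f"{name:12s}: {idx.size:5d} exceedances, "
          f"fraction of gaps shorter than the MA window = {frac_clustered:.2f}")
```

With Gaussian innovations the few exceedances are scattered roughly as a Poisson process, whereas with t(2) innovations a single very large innovation pushes several consecutive statistics over the threshold, producing the clusters of false discoveries the abstract describes.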