Multiple Comparisons with Control in a Single Experiment versus Separate Experiments: Why Do We Feel Differently?
- 1 May 1995
- Research article
- Published by Taylor & Francis in The American Statistician
- Vol. 49 (2) , 144-149
- https://doi.org/10.1080/00031305.1995.10476132
Abstract
We contrast comparisons of several treatments to control in a single experiment versus separate experiments in terms of Type I error rate and power. It is shown that if no Dunnett correction is applied in the single experiment case with relatively few treatments, the distribution of the number of Type I errors is not that different from what it would be in separate experiments with the same number of subjects in each treatment. The difference becomes more pronounced with a larger number of treatments. Extreme outcomes (either very few or very many rejections) are more likely when comparisons are made in a single experiment. When the total number of subjects is the same in a single versus separate experiments, power is generally higher in a single experiment even if a Dunnett adjustment is made.Keywords
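The contrast described in the abstract can be illustrated with a small Monte Carlo sketch (not the paper's own computation): with a shared control, the several test statistics are positively correlated, so the count of Type I errors is more spread toward the extremes than with independent controls. The sketch below assumes normal data with known variance (simple two-sided z-tests at 5%, no Dunnett adjustment); the function names and parameters are illustrative, not from the paper.

```python
import random
import statistics

random.seed(0)

def z_reject(x, y, crit=1.959964):
    """Two-sided two-sample z-test with known sigma = 1 at alpha = 0.05."""
    n = len(x)
    z = (statistics.mean(x) - statistics.mean(y)) / (2.0 / n) ** 0.5
    return abs(z) > crit

def simulate(k=3, n=20, shared_control=True, reps=2000):
    """Return the distribution (length k+1) of the number of Type I errors
    across k treatment-vs-control comparisons, all nulls true."""
    counts = [0] * (k + 1)
    for _ in range(reps):
        if shared_control:
            # Single experiment: every comparison reuses one control group.
            control = [random.gauss(0, 1) for _ in range(n)]
        rejections = 0
        for _ in range(k):
            if not shared_control:
                # Separate experiments: each gets its own control group.
                control = [random.gauss(0, 1) for _ in range(n)]
            treatment = [random.gauss(0, 1) for _ in range(n)]
            rejections += z_reject(treatment, control)
        counts[rejections] += 1
    return [c / reps for c in counts]

single = simulate(shared_control=True)     # P(0), P(1), ..., P(k) rejections
separate = simulate(shared_control=False)
```

In both settings the per-comparison Type I error rate stays near 5%, but the shared-control runs tend to put more mass on the extreme counts (0 or k rejections), in line with the abstract's observation.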
References
- Multiple Comparison Procedures, Wiley Series in Probability and Statistics, 1987
- Simultaneous Statistical Inference, published by Springer Nature, 1981
- On Mixtures from Exponential Families, Journal of the Royal Statistical Society Series B: Statistical Methodology, 1980
- A Multiple Comparison Procedure for Comparing Several Treatments with a Control, Journal of the American Statistical Association, 1955
- Multiple Range and Multiple F Tests, published by JSTOR, 1955