Using confidence intervals in within-subject designs
- 1 December 1994
- journal article
- research article
- Published by Springer Nature in Psychonomic Bulletin & Review
- Vol. 1 (4) , 476-490
- https://doi.org/10.3758/bf03210951
Abstract
We argue that to best comprehend many data sets, plotting judiciously selected sample statistics with associated confidence intervals can usefully supplement, or even replace, standard hypothesis-testing procedures. We note that most social science statistics textbooks limit discussion of confidence intervals to their use in between-subject designs. Our central purpose in this article is to describe how to compute an analogous confidence interval that can be used in within-subject designs. This confidence interval rests on the reasoning that because between-subject variance typically plays no role in statistical analyses of within-subject designs, it can legitimately be ignored; hence, an appropriate confidence interval can be based on the standard within-subject error term—that is, on the variability due to the subject × condition interaction. Computation of such a confidence interval is simple and is embodied in Equation 2 on p. 482 of this article. This confidence interval has two useful properties. First, it is based on the same error term as is the corresponding analysis of variance, and hence leads to comparable conclusions. Second, it is related by a known factor (√2) to a confidence interval of the difference between sample means; accordingly, it can be used to infer the faith one can put in some pattern of sample means as a reflection of the underlying pattern of population means. These two properties correspond to analogous properties of the more widely used between-subject confidence interval.
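The article's Equation 2 is not reproduced in this abstract, but a minimal sketch of the computation it describes, assuming an n-subject × c-condition score matrix and the standard within-subject error term (the mean square of the subject × condition interaction), might look like the following; the function name and toy data are illustrative, not taken from the article.

```python
import numpy as np
from scipy import stats

def within_subject_ci(data, confidence=0.95):
    """Half-width of a within-subject confidence interval for condition means.

    `data` is an n_subjects x n_conditions array. The error term is the mean
    square of the subject x condition interaction, as in the within-subject
    ANOVA; the half-width is t_crit * sqrt(MS_interaction / n).
    """
    data = np.asarray(data, dtype=float)
    n, c = data.shape

    grand_mean = data.mean()
    subj_means = data.mean(axis=1, keepdims=True)   # n x 1
    cond_means = data.mean(axis=0, keepdims=True)   # 1 x c

    # Interaction residuals: remove subject and condition main effects.
    resid = data - subj_means - cond_means + grand_mean
    df_interaction = (n - 1) * (c - 1)
    ms_interaction = (resid ** 2).sum() / df_interaction

    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df_interaction)
    return t_crit * np.sqrt(ms_interaction / n)

# Illustrative data: 5 subjects x 3 conditions.
scores = np.array([[10, 13, 13],
                   [ 6,  8,  8],
                   [11, 14, 14],
                   [22, 23, 25],
                   [16, 18, 20]])
print(scores.mean(axis=0), "+/-", within_subject_ci(scores))
```

Consistent with the √2 relation noted in the abstract, a confidence interval on the difference between two condition means would be √2 times the half-width returned by this sketch.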