Computation and Interpretation of Multiple Regressions
- 1 January 1951
- journal article
- research article
- Published by Oxford University Press (OUP) in Journal of the Royal Statistical Society Series B: Statistical Methodology
- Vol. 13 (1), 100-119
- https://doi.org/10.1111/j.2517-6161.1951.tb00074.x
Abstract
Full practical directions are given for the expeditious calculation of regression equations, which are built up by successive insertion of additional variates. The computer can readily measure the contribution made by each new variate to the goodness of fit, omit variates which do not make a significant contribution, and select that group of independent variates which together give the lowest residual mean square in the dependent variate. The statistical properties of the regression equation, and of the successive reciprocal matrices from which it is derived, are discussed in detail, with special reference to the proper interpretation of the equation and its associated tests of significance. In dealing with experimental or observational data it is rarely necessary to work to more than four or at most five significant figures.
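The selection procedure the abstract describes (inserting additional variates one at a time, measuring each new variate's contribution to the fit, and keeping the group that gives the lowest residual mean square in the dependent variate) corresponds in modern terms to greedy forward selection. A minimal sketch follows; it is an illustration of that idea, not the paper's desk-calculation method via successive reciprocal matrices, and the function names and stopping rule are assumptions:

```python
import numpy as np

def residual_mean_square(X, y):
    """OLS residual mean square s^2 = RSS / (n - p) for a design matrix X
    that already includes its intercept column."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return rss / (n - p)

def forward_select(X, y):
    """Greedy forward selection: at each step insert the candidate variate
    that most lowers the residual mean square; stop when no insertion
    lowers it further.  Returns the chosen column indices and the final
    residual mean square."""
    n, k = X.shape
    chosen = []
    current = np.ones((n, 1))                # start from the intercept alone
    best_s2 = residual_mean_square(current, y)
    while True:
        candidates = [j for j in range(k) if j not in chosen]
        if not candidates:
            break
        # Trial fit with each remaining variate appended in turn.
        trials = [(residual_mean_square(np.hstack([current, X[:, [j]]]), y), j)
                  for j in candidates]
        s2, j = min(trials)
        if s2 >= best_s2:                    # no variate improves the fit
            break
        chosen.append(j)
        current = np.hstack([current, X[:, [j]]])
        best_s2 = s2
    return chosen, best_s2
```

Because the residual mean square divides by the residual degrees of freedom, inserting a variate that explains nothing tends to raise it rather than lower it, so this criterion also serves as the stopping rule, echoing the paper's advice to omit variates that make no significant contribution.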