Approximations for Regression with Covariate Measurement Error
- 1 December 1988
- journal article
- research article
- Published by JSTOR in Journal of the American Statistical Association
- Vol. 83 (404), 1057
- https://doi.org/10.2307/2290136
Abstract
The effect of covariate measurement error on maximum likelihood and maximum quasilikelihood estimates (MLE's or MQLE's) of regression parameters is considered. The naive estimates, obtained by ignoring error, are modified to account for it. The modified estimates can be computed explicitly from the naive ones, and have less bias for small error. They are particularly useful for Berkson error models. Examples illustrate their application to linear and nonlinear regression, errors in continuous covariates, and misclassification of discrete covariates. The estimates are applied to data relating air pollution to respiratory disease in children. Analytic methods and simulations are used to compare the estimates to others that have been proposed for dealing with covariate measurement errors. The basic idea is that the MLE or MQLE of the parameter θ is a function θ̂(δ) of the size δ of the error. When δ = 0 and there is no error, θ̂(0) is the usual MLE or MQLE. For δ small, an improvement over θ̂(0) is θ*. In many cases of practical interest θ* can be computed in terms of the mean and variance of the error conditional on the erroneously observed covariates. These conditional moments may be available for Berkson error models. The advantage of using θ* is that it has less bias than θ̂(0). Furthermore, its use does not require either knowledge of the error distribution or solution of the likelihood or quasilikelihood equations, both of which are necessary to find θ̂(δ). Its disadvantage is that it loses accuracy as δ becomes large. In addition, for structural error models its use requires knowledge of the covariate distribution.
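The abstract does not reproduce the correction formula. A minimal sketch of the idea it describes, under the assumption that θ* is the first-order Taylor expansion of θ̂(δ) in the error size δ about δ = 0 (notation θ̂(δ) as reconstructed above), would be:

```latex
\theta^{*} \;=\; \hat{\theta}(0) \;+\; \delta \left. \frac{d\hat{\theta}(\delta)}{d\delta} \right|_{\delta = 0}
```

with the derivative term computed from the naive fit and, in the Berkson case, from the mean and variance of the error conditional on the observed covariates.

For a concrete, deliberately simpler illustration of correcting a naive estimate with known error moments, the sketch below simulates simple linear regression with classical additive measurement error and applies the standard attenuation (reliability-ratio) correction. This is not the paper's θ* formula; the variable names, simulated values, and the assumption that the error variance is known are illustrative only.

```python
# Simulation contrasting the naive slope estimate with a moment-corrected one
# under classical additive measurement error. This is the standard attenuation
# (reliability-ratio) correction, shown only to illustrate the general idea of
# adjusting a naive estimate using moments of the error.
import numpy as np

rng = np.random.default_rng(0)

n = 5_000
beta0, beta1 = 1.0, 2.0
sigma_x, sigma_u, sigma_eps = 1.0, 0.5, 0.3   # true covariate, error, and noise SDs

x = rng.normal(0.0, sigma_x, n)               # true covariate (unobserved)
w = x + rng.normal(0.0, sigma_u, n)           # observed, error-prone covariate
y = beta0 + beta1 * x + rng.normal(0.0, sigma_eps, n)

# Naive estimate: least-squares slope of y on the error-prone w.
beta1_naive = np.cov(w, y, bias=True)[0, 1] / np.var(w)

# Moment correction: divide by the reliability ratio
# lambda = var(x) / (var(x) + var(u)), assumed known here.
reliability = sigma_x**2 / (sigma_x**2 + sigma_u**2)
beta1_corrected = beta1_naive / reliability

print(f"true slope      : {beta1:.3f}")
print(f"naive slope     : {beta1_naive:.3f}")      # attenuated toward zero
print(f"corrected slope : {beta1_corrected:.3f}")  # approximately unbiased
```

In this classical-error setting the naive slope is biased toward zero by the factor var(x)/(var(x) + var(u)); dividing by that factor removes most of the bias. That the correction needs var(x) mirrors the abstract's point that, for structural error models, such adjustments require knowledge of the covariate distribution.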