The Effect of Serial Dilution Error on Calibration Inference in Immunoassay

Abstract
A common practice in immunoassay is the use of sequential dilutions of an initial stock solution of the antigen of interest to obtain standard samples in a desired concentration range. Nonlinear, heteroscedastic regression models are a common framework for analysis, and the usual methods for fitting these models assume that measured responses on the standards are independent. However, the dilution procedure propagates random measurement error from each standard to the next, which may invalidate this assumption. We demonstrate that failure to account for serial dilution error in calibration inference on unknown samples leads to seriously inaccurate assessments of assay precision, such as confidence intervals and precision profiles. Techniques for taking serial dilution error into account based on data from multiple assay runs are discussed and are shown to yield valid calibration inferences.
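To make the independence violation concrete, the sketch below simulates error propagation through a dilution series. The two-fold dilution scheme, multiplicative lognormal pipetting error, four-parameter logistic (4PL) mean response, and all numerical values are illustrative assumptions for this sketch, not specifics taken from the paper.

```python
# Minimal simulation sketch of serial dilution error propagation.
# All modelling choices here (two-fold dilutions, lognormal pipetting error,
# 4PL response, error magnitudes) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def serial_dilution(c0, n_steps, dilution=0.5, cv=0.05):
    """Return (nominal, actual) concentrations for a serial dilution series.

    Each step multiplies the *previous actual* concentration by the intended
    dilution factor times a lognormal pipetting error, so an error made early
    in the series carries through to every later standard.
    """
    nominal = c0 * dilution ** np.arange(n_steps + 1)
    actual = np.empty(n_steps + 1)
    actual[0] = c0
    for k in range(1, n_steps + 1):
        actual[k] = actual[k - 1] * dilution * rng.lognormal(0.0, cv)
    return nominal, actual

def four_pl(conc, a=0.05, d=2.0, c50=10.0, b=1.0):
    """Four-parameter logistic mean response."""
    return d + (a - d) / (1.0 + (conc / c50) ** b)

# One assay run: responses are driven by the *actual* concentrations, but a
# naive calibration fit would regress them on the *nominal* ones.
nominal, actual = serial_dilution(c0=1000.0, n_steps=8)
response = four_pl(actual) * (1.0 + 0.03 * rng.standard_normal(actual.shape))

# Across many simulated runs, the log-scale concentration errors accumulate
# down the series and are therefore positively correlated, violating the
# independence assumption behind the usual fitting methods.
runs = 2000
log_errors = np.empty((runs, 9))
for i in range(runs):
    nom, act = serial_dilution(c0=1000.0, n_steps=8)
    log_errors[i] = np.log(act / nom)
print("corr(step 4, step 8):", np.corrcoef(log_errors[:, 4], log_errors[:, 8])[0, 1])
```

Because each standard inherits the pipetting errors of every earlier dilution step, the log-scale concentration errors are positively correlated along the series, which is the dependence structure that the usual fitting methods ignore.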
