Choosing optimal sampling times for therapeutic drug monitoring.
- 1 January 1985
- journal article
- Vol. 4 (1) , 84-92
Abstract
Recent advances linking pharmacokinetic theory to the selection of optimal sampling times, and the influence of sampling times on the interpretation of serum drug concentrations, are discussed. For serum drug assay results to be interpreted easily, samples should be obtained during the postdistributive phase (for loading and maintenance doses) or at steady state (for maintenance doses). Beyond this, the choice of sampling times depends on whether peak, trough, or intermediate concentrations are being monitored. When intrapatient and interpatient variation in pharmacokinetic values is considered to be the principal component of variability, the best time to take a single sample after a test dose in order to predict the average concentration during the steady-state dosage interval is 1.4 times the population half-life of the drug. Optimal sampling times for predicting maximum or minimum steady-state concentrations may differ from those used to estimate the average concentration. Efforts to decrease the number of samples needed to accurately estimate steady-state drug concentrations have led investigators to test "single-point" prediction methods following a test dose. However, for drugs administered by continuous infusion, it seems prudent to assay more than one serum sample obtained before steady state is achieved; a single sample obtained too early could contribute to inaccurate predictions. Recommendations are presented for optimal sampling times for commonly monitored drugs.
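The two quantitative points in the abstract, the 1.4 × population half-life rule for a single post-dose sample and the risk of predicting steady state from one early sample during a continuous infusion, can be sketched numerically. The sketch below assumes a one-compartment model with first-order elimination, C(t) = Css·(1 − e^(−kt)); the function names, the half-life values, and the error comparison are illustrative assumptions, not taken from the paper.

```python
import math

def single_sample_time(half_life_h: float) -> float:
    """Optimal time (hours after a test dose) to draw a single sample
    for predicting the average steady-state concentration, using the
    1.4 x population half-life rule discussed in the abstract."""
    return 1.4 * half_life_h

def predict_css_from_infusion_sample(c_t: float, t: float,
                                     assumed_half_life_h: float) -> float:
    """Predict the steady-state concentration Css from a single sample
    drawn during a continuous infusion, by inverting
    C(t) = Css * (1 - exp(-k*t)) with an assumed (population) half-life.
    Illustrative one-compartment model, not a method from the paper."""
    k = math.log(2) / assumed_half_life_h
    return c_t / (1.0 - math.exp(-k * t))

# Example: theophylline-like half-life of 8 h (assumed value).
print(single_sample_time(8.0))  # 11.2 h after the test dose

# Why an early infusion sample misleads: if the patient's true
# half-life is 8 h but the population estimate used for prediction
# is 6 h, a sample at 2 h gives a far worse Css prediction than a
# sample at 16 h, because early concentrations are most sensitive
# to mis-specified elimination rate.
true_k, css_true = math.log(2) / 8.0, 10.0
c_early = css_true * (1 - math.exp(-true_k * 2.0))
c_late = css_true * (1 - math.exp(-true_k * 16.0))
err_early = abs(predict_css_from_infusion_sample(c_early, 2.0, 6.0) - css_true)
err_late = abs(predict_css_from_infusion_sample(c_late, 16.0, 6.0) - css_true)
print(err_early > err_late)  # True: the early sample predicts worse
```

This mirrors the abstract's caution: with continuous infusion, a single pre-steady-state sample drawn too early amplifies any mismatch between the patient's true kinetics and population values, so assaying more than one sample is prudent.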