Abstract
Our increased dependence on complex models for engineering design, coupled with our decreased dependence on experimental observation, leads to the question: how does one know that a model is valid? As models become more complex (e.g., multiphysics models), testing them over the full range of possible applications becomes more difficult. This difficulty is compounded by the uncertainty that is invariably present in the experimental data used to test the model, the uncertainties in the parameters incorporated into the model, and the uncertainties in the model structure itself. Here, the issues associated with model validation are discussed, and a methodology is presented that incorporates measurement and model parameter uncertainty into a validation metric through a weighted r² norm. The methodology is based on first-order sensitivity analysis coupled with the use of statistical models for the uncertainty. Its results are compared with those of the more computationally expensive Monte Carlo method. The methodology was demonstrated for the nonlinear Burgers’ equation, the convective-dispersive equation, and conduction heat transfer with contact resistance. Simulated experimental data were used for the first two cases, and true experimental data were used for the third. The results from the sensitivity analysis approach compared well with those from the Monte Carlo method. The results show that the metric presented can discriminate between valid and invalid models, and the metric has the advantage that it can be applied to multivariate, correlated data.
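The approach summarized above can be illustrated with a minimal sketch. This is not the paper's implementation: the exponential-decay model, the parameter and measurement covariances, and all numerical values are assumed purely for illustration. The sketch propagates parameter uncertainty through a first-order sensitivity (Jacobian) approximation, checks that propagation against a Monte Carlo estimate, and forms a weighted r² metric by weighting the data–model residual with the inverse of the total (measurement plus model-parameter) covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model (assumed for illustration): y(t) = a * exp(-b * t)
t = np.linspace(0.0, 2.0, 10)

def model(theta):
    a, b = theta
    return a * np.exp(-b * t)

theta0 = np.array([1.0, 0.5])          # nominal parameter values (assumed)
cov_theta = np.diag([0.01, 0.0025])    # parameter covariance (assumed)
cov_meas = 0.005**2 * np.eye(len(t))   # measurement covariance (assumed)

# First-order sensitivity matrix S[i, j] = dy_i / dtheta_j (finite differences)
eps = 1e-6
S = np.column_stack([
    (model(theta0 + eps * np.eye(2)[j]) - model(theta0)) / eps
    for j in range(2)
])

# Linearized propagation of parameter uncertainty to the model output
cov_model_fo = S @ cov_theta @ S.T
V = cov_meas + cov_model_fo            # total covariance of the residual

# Monte Carlo check of the propagated model-output covariance
samples = rng.multivariate_normal(theta0, cov_theta, size=20000)
ys = np.array([model(th) for th in samples])
cov_model_mc = np.cov(ys, rowvar=False)

# Weighted r^2 metric for simulated "experimental" data:
# residuals weighted by the inverse total covariance; small values
# (relative to the number of observations) suggest a valid model.
data = model(theta0) + rng.multivariate_normal(np.zeros(len(t)), cov_meas)
e = data - model(theta0)
r2 = e @ np.linalg.solve(V, e)
```

Because the total covariance V is a full matrix, the metric naturally accommodates multivariate, correlated data; for this nearly linear example, the first-order and Monte Carlo covariances agree closely.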