Prior information and uncertainty in inverse problems

Abstract
Solving any inverse problem requires understanding the uncertainties in the data to know what it means to fit the data. We also need methods to incorporate data-independent prior information to eliminate unreasonable models that fit the data. Both of these issues involve subtle choices that may significantly influence the results of inverse calculations. The specification of prior information is especially controversial. How does one quantify information? What does it mean to know something about a parameter a priori? In this tutorial we discuss Bayesian and frequentist methodologies that can be used to incorporate information into inverse calculations. In particular we show that apparently conservative Bayesian choices, such as representing interval constraints by uniform probabilities (as is commonly done when using genetic algorithms, for example), may lead to artificially small uncertainties. We also describe tools from statistical decision theory that can be used to characterize the performance of inversion algorithms.
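To make the caution about uniform priors concrete, the following is a minimal numerical sketch of one way such an effect can arise; it is not taken from the paper, and the parameter count and the averaged quantity are illustrative assumptions. It contrasts a hard interval constraint on an aggregate of many parameters with the much tighter prior that independent uniform distributions induce on the same aggregate.

```python
import numpy as np

# Sketch (not from the paper): N parameters, each known only to lie in [-1, 1].
# The hard constraint alone bounds their average m = (1/N) * sum(x_i) in [-1, 1].
# Modelling "x_i in [-1, 1]" as independent Uniform(-1, 1) priors instead makes
# the induced prior on m concentrate near 0 (std ~ 1/sqrt(3N)), so a credible
# interval for m can be far narrower than the interval constraint itself justifies.

rng = np.random.default_rng(0)
N = 100                                   # number of model parameters (assumed)
samples = rng.uniform(-1.0, 1.0, size=(100_000, N))
m = samples.mean(axis=1)                  # prior samples of the average

lo, hi = np.percentile(m, [2.5, 97.5])
print("Hard-constraint bound on m       : [-1.000, 1.000]")
print(f"95% interval under uniform priors: [{lo:.3f}, {hi:.3f}]")
print(f"Std of m under uniform priors    : {m.std():.3f} (~1/sqrt(3N) = {1/np.sqrt(3*N):.3f})")
```

For N = 100 the induced 95% interval is roughly [-0.11, 0.11], an order of magnitude narrower than the interval the constraint alone would warrant, which illustrates how an apparently noncommittal uniform prior can smuggle in strong information about derived quantities.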
