Bayesian Inference Using Intervals of Measures
Open Access
- 1 March 1981
- journal article
- Published by Institute of Mathematical Statistics in The Annals of Statistics
- Vol. 9 (2), 235-244
- https://doi.org/10.1214/aos/1176345391
Abstract
Partial prior knowledge is quantified by an interval $I(L, U)$ of $\sigma$-finite prior measures $Q$ satisfying $L(A) \leq Q(A) \leq U(A)$ for all measurable sets $A$, and is interpreted as acceptance of a family of bets. The concept of conditional probability distributions is generalized to that of conditional measures, and Bayes' theorem is extended to accommodate unbounded priors. By Bayes' theorem, the interval $I(L, U)$ of prior measures is transformed, upon observing $X$, into a similar interval $I(L_x, U_x)$ of posterior measures. Upper and lower expectations and variances induced by such intervals of measures are obtained. Under weak regularity conditions, the upper and lower posterior expectations are strongly consistent estimators as the amount of data increases. The range of posterior expectations of an arbitrary function $b$ on the parameter space is asymptotically $b_N \pm \alpha\sigma_N + o(\sigma_N)$, where $b_N$ and $\sigma^2_N$ are the posterior mean and variance of $b$ induced by the upper prior measure $U$, and $\alpha$ is a constant, determined by the density of $L$ with respect to $U$, that reflects the uncertainty about the prior.
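To make the posterior interval concrete, below is a minimal numerical sketch (not from the paper) of the upper and lower posterior expectations over $I(L_x, U_x)$ on a discretized parameter grid. It uses the extremal-measure characterization of such interval classes: the maximizing measure has density $u_x$ where $b$ exceeds a threshold $t$ and $l_x$ below it, with $t$ the fixed point $t = E_Q[b]$, found here by iteration. The normal likelihood, the specific prior bounds, and all names (`theta`, `l`, `u`, `upper_expectation`) are illustrative assumptions.

```python
import numpy as np

# Discretized parameter grid and an assumed interval of prior measures:
# lower/upper prior densities l <= u (unnormalized; the measures need not
# be probability measures in this framework).
theta = np.linspace(-5, 5, 2001)
u = np.exp(-theta**2 / 8)                  # upper prior density U
l = 0.5 * u                                # lower prior density L, with L <= U

x = 1.3                                    # a single observation
lik = np.exp(-(x - theta)**2 / 2)          # N(theta, 1) likelihood (assumed)

# Bayes' theorem maps the prior interval to the posterior interval:
# posterior lower/upper densities proportional to l * lik and u * lik.
l_x, u_x = l * lik, u * lik

def upper_expectation(b, l_x, u_x, iters=100):
    """Upper posterior expectation of b over all Q with L_x <= Q <= U_x.

    The maximizing measure has density u_x where b > t and l_x where
    b <= t, with t equal to E_Q[b] at the optimum; iterate the map
    t -> E_{Q_t}[b], which increases monotonically to the supremum.
    """
    t = np.sum(b * u_x) / np.sum(u_x)      # start from the U-posterior mean
    for _ in range(iters):
        q = np.where(b > t, u_x, l_x)      # extremal measure for threshold t
        t = np.sum(b * q) / np.sum(q)
    return t

b = theta                                  # function of interest: the parameter
upper = upper_expectation(b, l_x, u_x)
lower = -upper_expectation(-b, l_x, u_x)   # lower expectation by symmetry
print(f"posterior expectation of theta lies in [{lower:.4f}, {upper:.4f}]")
```

With more data the interval shrinks, consistent with the asymptotic range $b_N \pm \alpha\sigma_N + o(\sigma_N)$ stated in the abstract.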