Approximate Bayesian Computation: a non-parametric perspective
Abstract
Approximate Bayesian Computation is a family of likelihood-free inference techniques that are well-suited to models defined in terms of a stochastic generating mechanism. In a nutshell, Approximate Bayesian Computation proceeds by computing summary statistics ${\bf s}_{obs}$ from the data and simulating synthetic summary statistics for different values of the parameter $\Theta$. The posterior distribution is then approximated by an estimator of the conditional density $g(\Theta|{\bf s}_{obs})$. In this paper, we derive the asymptotic bias and variance of the standard estimators of the posterior distribution, which are based on rejection sampling and linear adjustment. Additionally, we introduce an original estimator of the posterior distribution based on quadratic adjustment, and we show that its bias contains fewer terms than that of the estimator with linear adjustment. Although the estimators with adjustment are not universally superior to the estimator based on rejection sampling, we find that they can achieve better performance when the relationship between the summary statistics and the parameter of interest is nearly homoscedastic. Last, we present model selection in Approximate Bayesian Computation. As in parameter estimation, the asymptotic results highlight the importance of the curse of dimensionality in Approximate Bayesian Computation.
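To make the adjustment steps concrete, a standard regression-adjustment formulation reads as follows (this is a sketch of the usual construction; the exact weighting and kernel choices analysed in the paper may differ). Let $(\Theta_i, {\bf s}_i)$, $i = 1, \dots, n$, denote the simulated parameter/summary-statistic pairs whose summaries fall within a tolerance $\delta$ of ${\bf s}_{obs}$ under rejection sampling. The linear adjustment replaces each accepted $\Theta_i$ by
$$
\Theta_i^{*} = \Theta_i - \hat{\beta}^{T}\,({\bf s}_i - {\bf s}_{obs}),
$$
where $\hat{\beta}$ is obtained from a (locally weighted) linear regression of $\Theta$ on ${\bf s}$ among the accepted pairs, while the quadratic adjustment additionally fits and subtracts second-order terms in $({\bf s}_i - {\bf s}_{obs})$. The weighted empirical distribution of the adjusted values $\Theta_i^{*}$ then serves as the estimate of $g(\Theta|{\bf s}_{obs})$.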