Estimation via Markov chain Monte Carlo
- 26 March 2003
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Control Systems
- Vol. 23 (2), 34-45
- https://doi.org/10.1109/mcs.2003.1188770
Abstract
Markov chain Monte Carlo (MCMC) is a powerful means for generating random samples that can be used in computing statistical estimates and marginal and conditional probabilities. MCMC methods rely on dependent (Markov) sequences having a limiting distribution corresponding to a distribution of interest. This article is a survey of popular implementations of MCMC, focusing particularly on the two most popular specific implementations of MCMC: Metropolis-Hastings (M-H) and Gibbs sampling. Our aim is to provide the reader with some of the central motivation and the rudiments needed for a straightforward application.
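To make the idea concrete, here is a minimal sketch of a random-walk Metropolis-Hastings sampler in Python. It is not the article's implementation: the standard-normal target, the Gaussian proposal width, and all function names are illustrative assumptions. The chain proposes a symmetric random-walk step and accepts it with probability min(1, p(x')/p(x)), so its limiting distribution is the target.

```python
import math
import random

def metropolis_hastings(log_target, x0, proposal_std=1.0, n_samples=5000, seed=0):
    """Draw dependent samples from an unnormalized density via a
    random-walk Metropolis-Hastings chain (illustrative sketch)."""
    rng = random.Random(seed)
    x = x0
    log_p = log_target(x)
    samples = []
    for _ in range(n_samples):
        # Symmetric Gaussian random-walk proposal
        x_new = x + rng.gauss(0.0, proposal_std)
        log_p_new = log_target(x_new)
        # Accept with probability min(1, p(x_new)/p(x)),
        # computed in log space for numerical stability
        if math.log(rng.random()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        samples.append(x)
    return samples

# Hypothetical example target: standard normal, log density up to a constant
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because the proposal is symmetric, the Hastings correction ratio of proposal densities cancels, leaving the simpler Metropolis acceptance rule; the sample mean and variance should approach those of the target as the chain lengthens.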