Conditioning in Markov Chain Monte Carlo
- 1 June 1995
- Research article
- Published by Taylor & Francis in Journal of Computational and Graphical Statistics
- Vol. 4 (2), 148-154
- https://doi.org/10.1080/10618600.1995.10474672
Abstract
The so-called “Rao-Blackwellized” estimators proposed by Gelfand and Smith do not always reduce variance in Markov chain Monte Carlo when the dependence in the Markov chain is taken into account. An illustrative example is given, and a theorem characterizing the necessary and sufficient condition for such an estimator to always reduce variance is proved.
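To make the comparison concrete, here is a minimal sketch (not the paper's example) of the two estimators in question, written under the assumption of a two-component Gibbs sampler for a bivariate normal with correlation `rho`: the ordinary ergodic average of the draws versus the "Rao-Blackwellized" average of conditional expectations. All names (`rho`, `n_iter`, the batch-means helper) are illustrative choices, and the batch-means standard errors only give a rough sense of each estimator's Monte Carlo variance once chain dependence is taken into account.

```python
# Illustrative sketch: standard vs. Rao-Blackwellized MCMC averages
# for E[X] = 0 under a bivariate normal Gibbs sampler (assumed example).
import numpy as np

rng = np.random.default_rng(0)
rho, n_iter = 0.9, 100_000

x, y = 0.0, 0.0
xs = np.empty(n_iter)          # raw draws of X
cond_means = np.empty(n_iter)  # E[X | Y_t] = rho * Y_t, used by the RB estimator

for t in range(n_iter):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw X | Y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw Y | X
    xs[t] = x
    cond_means[t] = rho * y

def batch_means_se(z, n_batches=100):
    """Crude batch-means standard error, accounting for chain dependence."""
    b = len(z) // n_batches
    means = z[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(n_batches)

print("standard estimate :", xs.mean(), "+/-", batch_means_se(xs))
print("Rao-Blackwellized :", cond_means.mean(), "+/-", batch_means_se(cond_means))
```

In this particular bivariate-normal example the conditional-expectation average tends to come out with the smaller standard error; the point of the article is that this ordering need not hold in general once the dependence of the chain is taken into account.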
References
- Covariance structure of the Gibbs sampler with applications to the comparisons of estimators and augmentation schemes, Biometrika, 1994
- Spatial Statistics and Bayesian Computation, Journal of the Royal Statistical Society Series B: Statistical Methodology, 1993
- Bayesian Computation Via the Gibbs Sampler and Related Markov Chain Monte Carlo Methods, Journal of the Royal Statistical Society Series B: Statistical Methodology, 1993
- Practical Markov Chain Monte Carlo, Statistical Science, 1992
- Gibbs sampling for marginal posterior expectations, Communications in Statistics - Theory and Methods, 1991
- Sampling-Based Approaches to Calculating Marginal Densities, Journal of the American Statistical Association, 1990
- The Calculation of Posterior Distributions by Data Augmentation, Journal of the American Statistical Association, 1987
- Evidential reasoning using stochastic simulation of causal models, Artificial Intelligence, 1987
- A Course in Functional Analysis, published by Springer Nature, 1985