Paper No. 10-02

K Latuszynski and JS Rosenthal

Adaptive Gibbs samplers

Abstract: We consider various versions of adaptive Gibbs and Metropolis-within-Gibbs samplers, which update their selection probabilities (and perhaps also their proposal distributions) on the fly during a run, by learning as they go in an attempt to optimise the algorithm. We present a cautionary example of how even a simple-seeming adaptive Gibbs sampler may fail to converge. We then present various positive results guaranteeing convergence of adaptive Gibbs samplers under certain conditions. AMS 2000 subject classifications: Primary 60J05, 65C05; secondary 62F15.
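The adaptation mechanism mentioned in the abstract (coordinate-selection probabilities updated during the run) can be illustrated with a minimal sketch. The target, the adaptation rule, and all names below are illustrative assumptions, not the paper's algorithm: a random-scan Gibbs sampler on a bivariate normal whose selection probabilities are nudged, with a diminishing step size, toward the coordinate showing larger average squared jumps, while staying bounded away from zero.

```python
import math
import random

def adaptive_gibbs(n_iter=5000, rho=0.8, seed=0):
    """Toy adaptive random-scan Gibbs sampler (illustrative only).

    Target: bivariate normal, unit variances, correlation rho, so the
    full conditional of x_i given x_j is N(rho * x_j, 1 - rho^2) and
    each coordinate update is an exact conditional draw.
    """
    rng = random.Random(seed)
    x = [0.0, 0.0]
    alpha = [0.5, 0.5]        # coordinate-selection probabilities
    avg_move = [1.0, 1.0]     # running mean of squared jumps per coordinate
    eps = 0.1                 # keep selection probabilities in [eps, 1-eps]
    samples = []

    for n in range(1, n_iter + 1):
        # choose which coordinate to update
        i = 0 if rng.random() < alpha[0] else 1
        j = 1 - i

        # exact draw from the full conditional x_i | x_j
        old = x[i]
        x[i] = rho * x[j] + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)

        # adapt on the fly: step size 1/n -> 0 (diminishing adaptation)
        avg_move[i] += ((x[i] - old) ** 2 - avg_move[i]) / n
        total = avg_move[0] + avg_move[1]
        alpha[0] = eps + (1.0 - 2.0 * eps) * avg_move[0] / total
        alpha[1] = 1.0 - alpha[0]

        samples.append(tuple(x))

    return samples, alpha

samples, alpha = adaptive_gibbs()
```

The `1/n` step size and the `eps` floor reflect the two ingredients typically needed for convergence of adaptive samplers: adaptation that dies out over time, and selection probabilities that cannot collapse onto a single coordinate. The paper's cautionary example shows that without such safeguards even simple adaptive Gibbs samplers can fail to converge.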

Keywords: MCMC estimation, adaptive MCMC, Gibbs sampling.