Ancestor Sampling for Particle Gibbs

Abstract

We present a novel method in the family of particle MCMC methods that we refer to as particle Gibbs with ancestor sampling (PG-AS). As in the existing PG with backward simulation (PG-BS) procedure, we use backward sampling to considerably improve the mixing of the PG kernel. Instead of using separate forward and backward sweeps as in PG-BS, however, we achieve the same effect in a single forward sweep. We apply the PG-AS framework to the challenging class of non-Markovian state-space models. We develop a truncation strategy for these models that is applicable in principle to any backward-simulation-based method, but which is particularly well suited to the PG-AS framework. In particular, as we show in a simulation study, PG-AS can yield an order-of-magnitude improvement in accuracy relative to PG-BS, owing to its robustness to the truncation error. Several application examples are discussed, including Rao-Blackwellized particle smoothing and inference in degenerate state-space models.
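To make the core idea concrete, the following is a minimal sketch of one conditional SMC sweep with ancestor sampling for a simple Markovian state-space model under a bootstrap proposal; it illustrates the mechanism only and does not cover the paper's non-Markovian truncation strategy. The names `pgas_sweep`, `mu_sample`, `f_sample`, `f_logpdf`, and `g_logpdf` are hypothetical placeholders for user-supplied initial/transition samplers and transition/observation densities. The key step is that the ancestor of the retained reference particle is drawn with probability proportional to the previous weights times the transition density into the reference state, rather than being fixed as in standard PG.

```python
import numpy as np

def pgas_sweep(y, x_ref, N, mu_sample, f_sample, f_logpdf, g_logpdf):
    """One conditional SMC sweep with ancestor sampling (sketch).

    y        : observations, shape (T,)
    x_ref    : reference trajectory from the previous iteration, shape (T,)
    N        : number of particles (particle N-1 is tied to the reference)
    mu_sample: mu_sample(n) -> n draws from the initial state distribution
    f_sample : f_sample(x_prev) -> one-step transition draws (vectorised)
    f_logpdf : f_logpdf(x_next, x_prev) -> log transition density (vectorised)
    g_logpdf : g_logpdf(y_t, x_t) -> log observation density (vectorised)
    Returns one trajectory drawn from the (sketched) PG-AS kernel.
    """
    T = len(y)
    X = np.zeros((T, N))                 # particle states
    A = np.zeros((T, N), dtype=int)      # ancestor indices

    # Initialise: free particles from the prior, reference particle fixed.
    X[0, :N - 1] = mu_sample(N - 1)
    X[0, N - 1] = x_ref[0]
    logw = g_logpdf(y[0], X[0])          # bootstrap weights

    for t in range(1, T):
        w = np.exp(logw - logw.max())
        w /= w.sum()

        # Resample ancestors for the N-1 free particles.
        A[t, :N - 1] = np.random.choice(N, size=N - 1, p=w)

        # Ancestor sampling step: draw the reference particle's ancestor
        # with probability proportional to w_{t-1}^i * f(x_ref[t] | x_{t-1}^i).
        log_as = logw + f_logpdf(x_ref[t], X[t - 1])
        p_as = np.exp(log_as - log_as.max())
        p_as /= p_as.sum()
        A[t, N - 1] = np.random.choice(N, p=p_as)

        # Propagate free particles; keep the reference state in the last slot.
        X[t, :N - 1] = f_sample(X[t - 1, A[t, :N - 1]])
        X[t, N - 1] = x_ref[t]
        logw = g_logpdf(y[t], X[t])

    # Draw one trajectory by sampling a final index and tracing ancestors back.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    k = np.random.choice(N, p=w)
    traj = np.zeros(T)
    for t in reversed(range(T)):
        traj[t] = X[t, k]
        k = A[t, k]
    return traj
```

In an outer Gibbs loop one would alternate such a sweep with updates of any model parameters, feeding the returned trajectory back in as the reference for the next iteration; because the ancestor indices are refreshed during the forward pass, no separate backward-simulation pass is needed.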

Cite

Text

Lindsten et al. "Ancestor Sampling for Particle Gibbs." Neural Information Processing Systems, 2012.

Markdown

[Lindsten et al. "Ancestor Sampling for Particle Gibbs." Neural Information Processing Systems, 2012.](https://mlanthology.org/neurips/2012/lindsten2012neurips-ancestor/)

BibTeX

@inproceedings{lindsten2012neurips-ancestor,
  title     = {{Ancestor Sampling for Particle Gibbs}},
  author    = {Lindsten, Fredrik and Schön, Thomas and Jordan, Michael I.},
  booktitle = {Neural Information Processing Systems},
  year      = {2012},
  pages     = {2591--2599},
  url       = {https://mlanthology.org/neurips/2012/lindsten2012neurips-ancestor/}
}