Estimating Convergence of Markov Chains with L-Lag Couplings

Abstract

Markov chain Monte Carlo (MCMC) methods generate samples that are asymptotically distributed from a target distribution of interest as the number of iterations goes to infinity. Various theoretical results provide upper bounds on the distance between the target and the marginal distribution after a fixed number of iterations. These upper bounds are derived on a case-by-case basis and typically involve intractable quantities, which limits their use for practitioners. We introduce L-lag couplings to generate computable, non-asymptotic upper bound estimates for the total variation or the Wasserstein distance of general Markov chains. We apply L-lag couplings to the tasks of (i) determining MCMC burn-in, (ii) comparing different MCMC algorithms with the same target, and (iii) comparing exact and approximate MCMC. Lastly, we (iv) assess the bias of sequential Monte Carlo and self-normalized importance samplers.
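The paper's estimator bounds the total variation distance at iteration t by an expectation over the meeting time of a lag-L coupled pair of chains, roughly of the form E[max(0, ⌈(τ − L − t)/L⌉)], where τ = inf{t ≥ L : X_t = Y_{t−L}} (see the paper for the precise statement and assumptions). The sketch below is a minimal, illustrative implementation of that idea for a toy setting: a standard Normal target sampled with coupled random-walk Metropolis-Hastings, where proposals are drawn from a maximal coupling and a common uniform is used for acceptance. The target, proposal scale, lag, initial distribution, and replicate count are assumptions chosen for illustration, not the paper's experiments or reference code.

```python
import numpy as np

def maximal_coupling(mu_x, mu_y, sigma):
    """Sample (X, Y) with X ~ N(mu_x, sigma^2), Y ~ N(mu_y, sigma^2),
    using the standard maximal coupling so that P(X == Y) is maximal."""
    logp = lambda z, m: -0.5 * ((z - m) / sigma) ** 2  # common scale: constants cancel
    x = np.random.normal(mu_x, sigma)
    if np.log(np.random.rand()) + logp(x, mu_x) <= logp(x, mu_y):
        return x, x
    while True:
        y = np.random.normal(mu_y, sigma)
        if np.log(np.random.rand()) + logp(y, mu_y) > logp(y, mu_x):
            return x, y

def coupled_mh_meeting_time(log_target, x0, y0, lag, prop_sigma, max_iter=10**5):
    """Run a lag-L coupled random-walk MH pair started from (x0, y0) and
    return the meeting time tau = inf{t >= L : X_t = Y_{t-L}}."""
    x = x0
    for _ in range(lag):  # advance the first chain L steps on its own
        prop = np.random.normal(x, prop_sigma)
        if np.log(np.random.rand()) < log_target(prop) - log_target(x):
            x = prop
    y = y0
    t = lag
    while t < max_iter:
        t += 1
        px, py = maximal_coupling(x, y, prop_sigma)  # coupled proposals
        u = np.log(np.random.rand())                 # common uniform => faithful coupling
        if u < log_target(px) - log_target(x):
            x = px
        if u < log_target(py) - log_target(y):
            y = py
        if x == y:
            return t
    return max_iter  # did not meet within the iteration budget

def tv_upper_bound(meeting_times, lag, t):
    """Monte Carlo estimate of E[max(0, ceil((tau - lag - t) / lag))]."""
    taus = np.asarray(meeting_times, dtype=float)
    return np.mean(np.maximum(0.0, np.ceil((taus - lag - t) / lag)))

if __name__ == "__main__":
    log_target = lambda z: -0.5 * z ** 2  # standard Normal target (illustrative)
    lag = 10
    # both chains start from an offset initial distribution N(10, 1)
    taus = [coupled_mh_meeting_time(log_target,
                                    np.random.normal(10.0, 1.0),
                                    np.random.normal(10.0, 1.0),
                                    lag, prop_sigma=1.0)
            for _ in range(200)]
    for t in (0, 50, 100, 200):
        print(f"TV upper bound estimate at t={t}: {tv_upper_bound(taus, lag, t):.3f}")
```

Averaging over independent replicates of the meeting time, the printed values trace out an estimated upper bound on d_TV as a function of t, which is how the bound can be read off to choose a burn-in in this toy example.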

Cite

Text

Biswas et al. "Estimating Convergence of Markov Chains with L-Lag Couplings." Neural Information Processing Systems, 2019.

Markdown

[Biswas et al. "Estimating Convergence of Markov Chains with L-Lag Couplings." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/biswas2019neurips-estimating/)

BibTeX

@inproceedings{biswas2019neurips-estimating,
  title     = {{Estimating Convergence of Markov Chains with L-Lag Couplings}},
  author    = {Biswas, Niloy and Jacob, Pierre E and Vanetti, Paul},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {7391-7401},
  url       = {https://mlanthology.org/neurips/2019/biswas2019neurips-estimating/}
}