Convergence of Langevin MCMC in KL-Divergence
Abstract
Langevin diffusion is a commonly used tool for sampling from a given distribution. In this work, we establish that when the target density $p^*$ is such that $\log p^*$ is $L$-smooth and $m$-strongly convex, discrete Langevin diffusion produces a distribution $p$ with $\mathrm{KL}(p \,\|\, p^*) \le \epsilon$ in $\tilde{O}(d/\epsilon)$ steps, where $d$ is the dimension of the sample space. We also study the convergence rate when the strong-convexity assumption is absent. By considering the Langevin diffusion as a gradient flow in the space of probability distributions, we obtain an elegant analysis that applies to the stronger property of convergence in KL-divergence and gives a conceptually simpler proof of the best-known convergence results in weaker metrics.
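The discrete Langevin diffusion analyzed above can be illustrated with a short sketch of the unadjusted Langevin algorithm (ULA), which iterates $x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\,\xi_k$ for $f = -\log p^*$ and standard Gaussian noise $\xi_k$. The function and parameter names below are illustrative, not from the paper; the target here is a standard Gaussian, for which $f$ is $1$-smooth and $1$-strongly convex.

```python
import numpy as np

def ula_sample(grad_f, x0, step, n_steps, rng):
    """Unadjusted Langevin algorithm (illustrative sketch).

    Iterates x <- x - step * grad_f(x) + sqrt(2 * step) * N(0, I)
    and returns the final iterate, whose law approximates p* ∝ exp(-f).
    """
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_f(x) + np.sqrt(2.0 * step) * noise
    return x

# Target: standard Gaussian, f(x) = ||x||^2 / 2, so grad_f(x) = x (L = m = 1).
rng = np.random.default_rng(0)
samples = np.array(
    [ula_sample(lambda x: x, np.zeros(2), step=0.01, n_steps=2000, rng=rng)
     for _ in range(500)]
)
```

With a small step size the stationary distribution of ULA is close to the target: the empirical mean of `samples` should be near 0 and the per-coordinate variance near 1, up to a discretization bias of order `step`.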
Cite
Cheng and Bartlett. "Convergence of Langevin MCMC in KL-Divergence." Proceedings of Algorithmic Learning Theory, 2018.

BibTeX
@inproceedings{cheng2018alt-convergence,
title = {{Convergence of Langevin MCMC in KL-Divergence}},
author = {Cheng, Xiang and Bartlett, Peter},
booktitle = {Proceedings of Algorithmic Learning Theory},
year = {2018},
pages = {186--211},
volume = {83},
url = {https://mlanthology.org/alt/2018/cheng2018alt-convergence/}
}