Underdamped Langevin MCMC: A Non-Asymptotic Analysis
Abstract
We study the underdamped Langevin diffusion when the log of the target distribution is smooth and strongly concave. We present an MCMC algorithm based on its discretization and show that it achieves $\varepsilon$ error (in 2-Wasserstein distance) in $\mathcal{O}(\sqrt{d}/\varepsilon)$ steps. This is a significant improvement over the best known rate for overdamped Langevin MCMC, which requires $\mathcal{O}(d/\varepsilon^2)$ steps under the same smoothness and concavity assumptions. The underdamped Langevin MCMC scheme can be viewed as a version of Hamiltonian Monte Carlo (HMC), which has been observed to outperform overdamped Langevin MCMC methods in a number of application areas. We provide quantitative rates that support this empirical wisdom.
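To make the dynamics concrete, the sketch below simulates the underdamped Langevin diffusion $dv_t = -\gamma v_t\,dt - u\nabla f(x_t)\,dt + \sqrt{2\gamma u}\,dB_t$, $dx_t = v_t\,dt$ with a simple Euler–Maruyama step. This is a minimal illustration under assumed parameter choices (the friction $\gamma$, scale $u$, and step size $h$ are illustrative); the paper analyzes a sharper discretization that integrates the linear part of the SDE exactly, which is what yields the $\mathcal{O}(\sqrt{d}/\varepsilon)$ rate.

import numpy as np

def underdamped_langevin_step(x, v, grad_f, h, gamma=2.0, u=1.0, rng=None):
    # One Euler-Maruyama step of the underdamped Langevin diffusion:
    #   dv = -gamma * v dt - u * grad_f(x) dt + sqrt(2 * gamma * u) dB_t
    #   dx = v dt
    # Illustrative only: the paper's integrator solves the linear SDE exactly.
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(x.shape)
    v_new = v - h * (gamma * v + u * grad_f(x)) + np.sqrt(2.0 * gamma * u * h) * noise
    x_new = x + h * v
    return x_new, v_new

# Example: sample from a standard Gaussian, whose negative log-density
# f(x) = ||x||^2 / 2 is 1-smooth and 1-strongly convex.
rng = np.random.default_rng(0)
x, v = np.zeros(2), np.zeros(2)
for _ in range(5000):
    x, v = underdamped_langevin_step(x, v, grad_f=lambda z: z, h=0.01, rng=rng)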
Cite
Text
Cheng et al. "Underdamped Langevin MCMC: A Non-Asymptotic Analysis." Annual Conference on Computational Learning Theory, 2018.Markdown
[Cheng et al. "Underdamped Langevin MCMC: A Non-Asymptotic Analysis." Annual Conference on Computational Learning Theory, 2018.](https://mlanthology.org/colt/2018/cheng2018colt-underdamped/)BibTeX
@inproceedings{cheng2018colt-underdamped,
  title = {{Underdamped Langevin MCMC: A Non-Asymptotic Analysis}},
  author = {Cheng, Xiang and Chatterji, Niladri S. and Bartlett, Peter L. and Jordan, Michael I.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year = {2018},
  pages = {300--323},
  url = {https://mlanthology.org/colt/2018/cheng2018colt-underdamped/}
}