Exponential Ergodicity of Mirror-Langevin Diffusions
Abstract
Motivated by the problem of sampling from ill-conditioned log-concave distributions, we give a clean non-asymptotic convergence analysis of mirror-Langevin diffusions as introduced in Zhang et al. (2020). As a special case of this framework, we propose a class of diffusions called Newton-Langevin diffusions and prove that they converge to stationarity exponentially fast with a rate that is not only dimension-free but also independent of the target distribution. We give an application of this result to the problem of sampling from the uniform distribution on a convex body using a strategy inspired by interior-point methods. Our general approach follows the recent trend of linking sampling and optimization and highlights the role of the chi-squared divergence. In particular, it yields new results on the convergence of the vanilla Langevin diffusion in Wasserstein distance.
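To make the objects in the abstract concrete, the following is a minimal illustrative sketch of one step of a Newton-Langevin scheme, where the mirror map is taken to be the potential V itself. The paper analyzes the continuous-time diffusion; the Euler-Maruyama-style discretization below, the quadratic target V(x) = 0.5 x^T A x, and the step size are assumptions made purely for illustration.

import numpy as np

# Illustrative sketch of a discretized Newton-Langevin step for a Gaussian
# target pi(x) proportional to exp(-V(x)) with V(x) = 0.5 * x^T A x.
# The mirror map is phi = V, so grad(phi)(x) = A x and Hess(phi)(x) = A.
# NOTE: the paper studies the continuous-time diffusion; this discretization,
# the step size h, and the quadratic target are assumptions for illustration.

rng = np.random.default_rng(0)
d = 2
A = np.diag([1.0, 100.0])        # ill-conditioned precision matrix
A_sqrt = np.linalg.cholesky(A)   # A^{1/2}, used for the diffusion term
A_inv = np.linalg.inv(A)         # used to invert the mirror map

def newton_langevin_step(x, h):
    grad_V = A @ x                            # gradient of the potential
    y = A @ x                                 # dual variable y = grad(phi)(x)
    noise = A_sqrt @ rng.standard_normal(d)   # Hess(phi)(x)^{1/2} * xi
    y = y - h * grad_V + np.sqrt(2.0 * h) * noise
    return A_inv @ y                          # map back: x = grad(phi*)(y)

x = np.ones(d)
for _ in range(1000):
    x = newton_langevin_step(x, h=0.1)
print(x)   # one approximate sample from the ill-conditioned Gaussian target

For this quadratic potential the update reduces to a preconditioned (Newton-like) Langevin step, which is why the convergence rate in the paper does not degrade with the conditioning of A.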
Cite
Text
Chewi et al. "Exponential Ergodicity of Mirror-Langevin Diffusions." Neural Information Processing Systems, 2020.
Markdown
[Chewi et al. "Exponential Ergodicity of Mirror-Langevin Diffusions." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/chewi2020neurips-exponential/)
BibTeX
@inproceedings{chewi2020neurips-exponential,
title = {{Exponential Ergodicity of Mirror-Langevin Diffusions}},
author = {Chewi, Sinho and Le Gouic, Thibaut and Lu, Chen and Maunu, Tyler and Rigollet, Philippe and Stromme, Austin},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/chewi2020neurips-exponential/}
}