Double-Loop Unadjusted Langevin Algorithm
Abstract
A well-known first-order method for sampling from log-concave probability distributions is the Unadjusted Langevin Algorithm (ULA). This work proposes a new annealing step-size schedule for ULA that yields new convergence guarantees for sampling from a smooth log-concave distribution, guarantees not covered by the existing state of the art. To establish this result, we derive a new theoretical bound relating the Wasserstein distance to the total variation distance between any two log-concave distributions, which complements the reach of the Talagrand $T_2$ inequality. Moreover, applying this new step-size schedule to an existing constrained sampling algorithm, we show state-of-the-art convergence rates for sampling from a constrained log-concave distribution, as well as improved dimension dependence.
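The ULA iteration referenced in the abstract is standard: a gradient step on the potential $U$ (the negative log-density) plus Gaussian noise, $x_{k+1} = x_k - \gamma \nabla U(x_k) + \sqrt{2\gamma}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I)$. A minimal sketch of this baseline iteration (not the paper's double-loop annealing schedule, whose details are in the full text; the function name and interface here are illustrative):

```python
import numpy as np

def ula(grad_U, x0, step, n_iters, rng=None):
    """Unadjusted Langevin Algorithm with a fixed step size.

    Iterates x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * xi_k,
    where xi_k ~ N(0, I). Returns the final iterate; with a small step,
    its law approximates the target density proportional to exp(-U).
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
    return x

# Example: standard Gaussian target, U(x) = ||x||^2 / 2, so grad_U(x) = x.
if __name__ == "__main__":
    samples = np.array([ula(lambda x: x, np.zeros(2), step=0.1,
                            n_iters=500, rng=seed)
                        for seed in range(2000)])
    print(samples.mean(axis=0))  # empirical mean close to [0, 0]
```

Note that with a fixed step size ULA is biased (for the Gaussian example above, the stationary variance is $1/(1 - \gamma/2)$ rather than $1$); shrinking the step size across stages is precisely what an annealing schedule such as the one proposed here addresses.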
Cite
Text
Rolland et al. "Double-Loop Unadjusted Langevin Algorithm." International Conference on Machine Learning, 2020.

Markdown

[Rolland et al. "Double-Loop Unadjusted Langevin Algorithm." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/rolland2020icml-doubleloop/)

BibTeX
@inproceedings{rolland2020icml-doubleloop,
title = {{Double-Loop Unadjusted Langevin Algorithm}},
author = {Rolland, Paul and Eftekhari, Armin and Kavis, Ali and Cevher, Volkan},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {8169--8177},
volume = {119},
url = {https://mlanthology.org/icml/2020/rolland2020icml-doubleloop/}
}