Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices
Abstract
We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability distribution $\nu = e^{-f}$ on $\mathbb{R}^n$. We prove a convergence guarantee in Kullback-Leibler (KL) divergence assuming $\nu$ satisfies a log-Sobolev inequality and $f$ has a bounded Hessian. Notably, we do not assume convexity or bounds on higher derivatives. We also prove convergence guarantees in Rényi divergence of order $q > 1$ assuming the limit of ULA satisfies either the log-Sobolev or the Poincaré inequality.
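The algorithm analyzed in the paper is the standard ULA iteration $x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\, z_k$ with $z_k \sim \mathcal{N}(0, I_n)$. The sketch below is a minimal illustration of that update, not code from the paper; the function and parameter names (`grad_f`, `step_size`, `n_steps`) and the Gaussian test target are choices made here for the example.

```python
import numpy as np

def ula_sample(grad_f, x0, step_size, n_steps, rng=None):
    """Run the Unadjusted Langevin Algorithm (ULA).

    Iterates x_{k+1} = x_k - step_size * grad_f(x_k)
                       + sqrt(2 * step_size) * z_k,  z_k ~ N(0, I_n).
    For small step sizes and many iterations, x_k is approximately
    distributed according to nu proportional to exp(-f), up to a bias
    that depends on the step size.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Example (hypothetical test case): sample from a standard Gaussian,
# f(x) = ||x||^2 / 2, so grad_f(x) = x.
if __name__ == "__main__":
    samples = np.array([
        ula_sample(lambda x: x, x0=np.zeros(2), step_size=0.01, n_steps=2000)
        for _ in range(500)
    ])
    print(samples.mean(axis=0), samples.var(axis=0))  # roughly 0 and 1
```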
Cite
Text
Vempala and Wibisono. "Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices." Neural Information Processing Systems, 2019.
Markdown
[Vempala and Wibisono. "Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/vempala2019neurips-rapid/)
BibTeX
@inproceedings{vempala2019neurips-rapid,
title = {{Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices}},
author = {Vempala, Santosh and Wibisono, Andre},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {8094--8106},
url = {https://mlanthology.org/neurips/2019/vempala2019neurips-rapid/}
}