Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev

Abstract

Classically, the continuous-time Langevin diffusion converges exponentially fast to its stationary distribution $\pi$ under the sole assumption that $\pi$ satisfies a Poincaré inequality. Using this fact to provide guarantees for the discrete-time Langevin Monte Carlo (LMC) algorithm, however, is considerably more challenging due to the need for working with chi-squared or Rényi divergences, and prior works have largely focused on strongly log-concave targets. In this work, we provide the first convergence guarantees for LMC assuming that $\pi$ satisfies either a Latała–Oleszkiewicz or modified log-Sobolev inequality, which interpolates between the Poincaré and log-Sobolev settings. Unlike prior works, our results allow for weak smoothness and do not require convexity or dissipativity conditions.
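For concreteness, the discrete-time LMC algorithm referred to in the abstract is the Euler–Maruyama discretization of the Langevin diffusion: with target $\pi \propto \exp(-V)$ and step size $h$, it iterates $x_{k+1} = x_k - h \nabla V(x_k) + \sqrt{2h}\, \xi_k$ with $\xi_k \sim \mathcal{N}(0, I_d)$. Below is a minimal sketch of this iteration; the function names, parameters, and the Gaussian example are illustrative and not taken from the paper.

import numpy as np

def lmc(grad_V, x0, step_size, n_iters, rng=None):
    """Unadjusted Langevin Monte Carlo targeting pi ∝ exp(-V).

    Each iterate follows x_{k+1} = x_k - h * grad_V(x_k) + sqrt(2h) * xi_k,
    where xi_k ~ N(0, I) and h is the step size.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x - step_size * grad_V(x) + np.sqrt(2.0 * step_size) * noise
        samples.append(x.copy())
    return np.array(samples)

# Illustrative usage: standard Gaussian target, V(x) = ||x||^2 / 2, so grad_V(x) = x.
samples = lmc(grad_V=lambda x: x, x0=np.zeros(2), step_size=0.05, n_iters=1000)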

Cite

Text

Chewi et al. "Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev." Conference on Learning Theory, 2022.

Markdown

[Chewi et al. "Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev." Conference on Learning Theory, 2022.](https://mlanthology.org/colt/2022/chewi2022colt-analysis/)

BibTeX

@inproceedings{chewi2022colt-analysis,
  title     = {{Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev}},
  author    = {Chewi, Sinho and Erdogdu, Murat A and Li, Mufan and Shen, Ruoqi and Zhang, Shunshi},
  booktitle = {Conference on Learning Theory},
  year      = {2022},
  pages     = {1--2},
  volume    = {178},
  url       = {https://mlanthology.org/colt/2022/chewi2022colt-analysis/}
}