Langevin Monte Carlo and JKO Splitting

Abstract

Algorithms based on discretizing the Langevin diffusion are popular tools for sampling from high-dimensional distributions. We develop novel connections between such Monte Carlo algorithms, the theory of Wasserstein gradient flow, and the operator splitting approach to solving PDEs. In particular, we show that a proximal version of the Unadjusted Langevin Algorithm corresponds to a scheme that alternates between solving the gradient flows of two specific functionals on the space of probability measures. Using this perspective, we derive new non-asymptotic results on the convergence properties of this algorithm.
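
As a concrete illustration of the alternating scheme described in the abstract, the following minimal Python sketch (not the paper's code) implements one common form of the proximal Unadjusted Langevin Algorithm for a standard Gaussian target: each iteration applies the proximal map of the potential U, which plays the role of the JKO step for the potential-energy functional, and then adds Gaussian noise, which realizes the heat-flow step for the entropy functional. The update x_{k+1} = prox_{hU}(x_k) + sqrt(2h) xi_k, the quadratic potential U(x) = ||x||^2 / 2, and all parameter values are illustrative assumptions rather than the paper's exact formulation; the reverse ordering of the two steps gives a closely related splitting.

import numpy as np

def prox_quadratic(x, h):
    # Proximal map of U(x) = ||x||^2 / 2:
    # argmin_y { U(y) + ||y - x||^2 / (2h) } = x / (1 + h).
    return x / (1.0 + h)

def proximal_langevin(n_steps=20000, h=0.05, dim=2, seed=0):
    # Alternate the two Wasserstein gradient-flow steps:
    #   (1) proximal (JKO-type) step for the potential-energy functional,
    #   (2) exact heat-flow step for the entropy functional, i.e. Gaussian noise.
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    samples = np.empty((n_steps, dim))
    for k in range(n_steps):
        x = prox_quadratic(x, h)                              # potential-energy step
        x = x + np.sqrt(2.0 * h) * rng.standard_normal(dim)   # entropy (heat-flow) step
        samples[k] = x
    return samples

samples = proximal_langevin()
# For the Gaussian target exp(-||x||^2 / 2), the empirical covariance should be
# close to the identity, up to a discretization bias that vanishes as h -> 0.
print(np.cov(samples[5000:].T))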

Cite

Text

Bernton. "Langevin Monte Carlo and JKO Splitting." Annual Conference on Computational Learning Theory, 2018.

Markdown

[Bernton. "Langevin Monte Carlo and JKO Splitting." Annual Conference on Computational Learning Theory, 2018.](https://mlanthology.org/colt/2018/bernton2018colt-langevin/)

BibTeX

@inproceedings{bernton2018colt-langevin,
  title     = {{Langevin Monte Carlo and JKO Splitting}},
  author    = {Bernton, Espen},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2018},
  pages     = {1777--1798},
  url       = {https://mlanthology.org/colt/2018/bernton2018colt-langevin/}
}