Analysis of Langevin Monte Carlo via Convex Optimization
Abstract
In this paper, we provide new insights on the Unadjusted Langevin Algorithm. We show that this method can be formulated as a first-order optimization algorithm for an objective functional defined on the Wasserstein space of order $2$. Using this interpretation and techniques borrowed from convex optimization, we give a non-asymptotic analysis of this method for sampling from a log-concave smooth target distribution on $\mathbb{R}^d$. Based on this interpretation, we propose two new methods for sampling from a non-smooth target distribution. These new algorithms are natural extensions of the Stochastic Gradient Langevin Dynamics (SGLD) algorithm, which is a popular extension of the Unadjusted Langevin Algorithm for large-scale Bayesian inference. Using the optimization perspective, we provide a non-asymptotic convergence analysis for the newly proposed methods.
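To make the algorithms discussed in the abstract concrete, below is a minimal NumPy sketch of the Unadjusted Langevin Algorithm recursion $x_{k+1} = x_k - \gamma \nabla U(x_k) + \sqrt{2\gamma}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I)$, together with a mini-batch variant in the spirit of SGLD where the full gradient is replaced by an unbiased stochastic estimate. The function names (`ula`, `sgld`) and the Gaussian example target are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ula(grad_U, x0, gamma, n_iters, rng=None):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k - gamma * grad_U(x_k) + sqrt(2 * gamma) * xi_k, with xi_k ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.size)
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * noise
        samples[k] = x
    return samples

def sgld(stoch_grad_U, x0, gamma, n_iters, rng=None):
    """Stochastic Gradient Langevin Dynamics: same recursion as ULA, but each step
    uses an unbiased mini-batch estimate stoch_grad_U(x, rng) of the gradient of U."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.size)
        x = x - gamma * stoch_grad_U(x, rng) + np.sqrt(2.0 * gamma) * noise
        samples[k] = x
    return samples

# Illustrative log-concave smooth target: pi(x) proportional to exp(-U(x)) with
# U(x) = ||x||^2 / 2 (standard Gaussian), so grad_U(x) = x.
if __name__ == "__main__":
    samples = ula(grad_U=lambda x: x, x0=np.zeros(2), gamma=1e-2, n_iters=50_000)
    print("empirical mean:", samples.mean(axis=0))  # close to 0, with a bias of order gamma
    print("empirical var: ", samples.var(axis=0))   # close to 1, with a bias of order gamma
```

Because the Langevin diffusion is discretized with a fixed step size and no Metropolis correction, the chain targets a distribution that is only approximately $\pi$; the step size controls the trade-off between this bias and the speed of convergence.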
Cite
Text
Durmus et al. "Analysis of Langevin Monte Carlo via Convex Optimization." Journal of Machine Learning Research, 2019.
Markdown
[Durmus et al. "Analysis of Langevin Monte Carlo via Convex Optimization." Journal of Machine Learning Research, 2019.](https://mlanthology.org/jmlr/2019/durmus2019jmlr-analysis/)
BibTeX
@article{durmus2019jmlr-analysis,
  title = {{Analysis of Langevin Monte Carlo via Convex Optimization}},
  author = {Durmus, Alain and Majewski, Szymon and Miasojedow, Błażej},
  journal = {Journal of Machine Learning Research},
  year = {2019},
  volume = {20},
  pages = {1-46},
  url = {https://mlanthology.org/jmlr/2019/durmus2019jmlr-analysis/}
}