Langevin Monte Carlo Without Smoothness
Abstract
Langevin Monte Carlo (LMC) is an iterative algorithm used to generate samples from a distribution that is known only up to a normalizing constant. The nonasymptotic dependence of its mixing time on the dimension and target accuracy is understood mainly in the setting of smooth (gradient-Lipschitz) log-densities, a serious limitation for applications in machine learning. In this paper, we remove this limitation, providing polynomial-time convergence guarantees for a variant of LMC in the setting of nonsmooth log-concave distributions. At a high level, our results follow by leveraging the implicit smoothing of the log-density that comes from a small Gaussian perturbation that we add to the iterates of the algorithm and controlling the bias and variance that are induced by this perturbation.
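The perturbation idea described in the abstract can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's exact algorithm or parameter choices: the nonsmooth potential is taken to be U(x) = ||x||_1 (a log-concave density with non-Lipschitz gradient), and the step size and perturbation scale `mu` are arbitrary illustrative values. The (sub)gradient is evaluated at a Gaussian-perturbed point, which mimics evaluating the gradient of a Gaussian-smoothed version of U.

```python
import numpy as np

rng = np.random.default_rng(0)

def subgrad_U(x):
    # Subgradient of the nonsmooth potential U(x) = ||x||_1,
    # i.e. the target density is proportional to exp(-U(x)).
    # Illustrative choice; any log-concave nonsmooth U would do.
    return np.sign(x)

def perturbed_lmc(x0, step=1e-2, mu=1e-3, n_iters=5000):
    """Sketch of an LMC variant in which each (sub)gradient is
    evaluated at a Gaussian-perturbed point. `mu` is the
    perturbation scale (hypothetical parameter name)."""
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        z = rng.standard_normal(x.shape)       # smoothing perturbation
        g = subgrad_U(x + mu * z)              # gradient at perturbed point
        noise = rng.standard_normal(x.shape)   # Langevin diffusion noise
        x = x - step * g + np.sqrt(2 * step) * noise
        samples.append(x.copy())
    return np.array(samples)

samples = perturbed_lmc(np.zeros(2))
```

For the symmetric target above, the empirical mean of the chain should hover near the origin; the perturbation scale `mu` trades off the bias it introduces against the smoothness it buys.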
Cite
Text
Chatterji et al. "Langevin Monte Carlo Without Smoothness." Artificial Intelligence and Statistics, 2020.
Markdown
[Chatterji et al. "Langevin Monte Carlo Without Smoothness." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/chatterji2020aistats-langevin/)
BibTeX
@inproceedings{chatterji2020aistats-langevin,
title = {{Langevin Monte Carlo Without Smoothness}},
author = {Chatterji, Niladri and Diakonikolas, Jelena and Jordan, Michael I. and Bartlett, Peter},
booktitle = {Artificial Intelligence and Statistics},
year = {2020},
pages = {1716-1726},
volume = {108},
url = {https://mlanthology.org/aistats/2020/chatterji2020aistats-langevin/}
}