Sampling from a Log-Concave Distribution with Compact Support with Proximal Langevin Monte Carlo
Abstract
This paper presents a detailed theoretical analysis of the Langevin Monte Carlo sampling algorithm recently introduced in Durmus et al. (Efficient Bayesian computation by proximal Markov chain Monte Carlo: when Langevin meets Moreau, 2016) when applied to log-concave probability distributions that are restricted to a convex body $K$. This method relies on a regularisation procedure involving the Moreau-Yosida envelope of the indicator function associated with $K$. Explicit convergence bounds in total variation norm and in Wasserstein distance of order $1$ are established. In particular, we show that the complexity of this algorithm given a first-order oracle is polynomial in the dimension of the state space. Finally, some numerical experiments are presented to compare our method with competing MCMC approaches from the literature.
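The algorithm the abstract describes admits a compact sketch: the non-smooth indicator of $K$ is replaced by its Moreau-Yosida envelope, whose gradient is $(x - \operatorname{proj}_K(x))/\lambda$, and the unadjusted Langevin iteration is run on the smoothed potential. The following is a minimal illustration under stated assumptions, not the authors' exact implementation; the names `myula_sample` and `proj_ball`, and the step-size and smoothing parameter values, are assumptions chosen for the example.

```python
import numpy as np

def myula_sample(grad_U, proj_K, x0, gamma, lam, n_steps, rng=None):
    """Sketch of Moreau-Yosida regularised Langevin Monte Carlo.

    The indicator of the convex body K is smoothed by its Moreau-Yosida
    envelope with parameter lam; the envelope's gradient at x is
    (x - proj_K(x)) / lam, where proj_K is the Euclidean projection onto K.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        # Euler-Maruyama step on the smoothed potential.
        drift = -gamma * grad_U(x) - (gamma / lam) * (x - proj_K(x))
        x = x + drift + np.sqrt(2.0 * gamma) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Toy target (an assumption for illustration): a standard Gaussian
# restricted to the unit Euclidean ball.
grad_U = lambda x: x                    # U(x) = ||x||^2 / 2

def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

chain = myula_sample(grad_U, proj_ball, x0=np.zeros(2),
                     gamma=1e-2, lam=1e-2, n_steps=5000, rng=0)
```

Because the envelope only penalises (rather than forbids) excursions outside $K$, iterates may leave the ball slightly; the smoothing parameter $\lambda$ controls how closely the regularised target approximates the restricted one.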
Brosse et al. "Sampling from a Log-Concave Distribution with Compact Support with Proximal Langevin Monte Carlo." Proceedings of the 2017 Conference on Learning Theory, 2017.
@inproceedings{brosse2017colt-sampling,
title = {{Sampling from a Log-Concave Distribution with Compact Support with Proximal Langevin Monte Carlo}},
author = {Brosse, Nicolas and Durmus, Alain and Moulines, Éric and Pereyra, Marcelo},
booktitle = {Proceedings of the 2017 Conference on Learning Theory},
year = {2017},
pages = {319--342},
volume = {65},
url = {https://mlanthology.org/colt/2017/brosse2017colt-sampling/}
}