Langevin Quasi-Monte Carlo
Abstract
Langevin Monte Carlo (LMC) and its stochastic gradient versions are powerful algorithms for sampling from complex high-dimensional distributions. To sample from a distribution with density $\pi(\theta)\propto \exp(-U(\theta))$, LMC iteratively generates the next sample by taking a step in the negative gradient direction $-\nabla U$ with added Gaussian perturbations. Expectations w.r.t. the target distribution $\pi$ are estimated by averaging over LMC samples. In ordinary Monte Carlo, it is well known that the estimation error can be substantially reduced by replacing independent random samples with quasi-random samples such as low-discrepancy sequences. In this work, we show that the estimation error of LMC can also be reduced by using quasi-random samples. Specifically, we propose to use completely uniformly distributed (CUD) sequences with a certain low-discrepancy property to generate the Gaussian perturbations. Under smoothness and convexity conditions, we prove that LMC with a low-discrepancy CUD sequence achieves smaller error than standard LMC. The theoretical analysis is supported by numerical experiments, which demonstrate the effectiveness of our approach.
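The update described in the abstract is easy to write down; the sketch below contrasts standard LMC, driven by i.i.d. Gaussians, with a quasi-random variant in which the perturbations are obtained from low-discrepancy uniforms via the inverse normal CDF. This is a minimal illustration under stated assumptions, not the paper's implementation: a scrambled Sobol' sequence stands in for the driver because no CUD generator ships with SciPy, whereas the paper's proposal and analysis use CUD sequences; the names `lmc`, `grad_U`, the step size `h`, and the toy Gaussian target are all placeholders chosen for this sketch.

```python
import numpy as np
from scipy.stats import norm, qmc

def lmc(grad_U, theta0, h, n_steps, perturbations):
    """Unadjusted Langevin update:
    theta_{k+1} = theta_k - h * grad_U(theta_k) + sqrt(2h) * xi_k,
    where xi_k is the k-th row of `perturbations`."""
    theta = np.asarray(theta0, dtype=float).copy()
    samples = np.empty((n_steps, theta.size))
    for k in range(n_steps):
        theta = theta - h * grad_U(theta) + np.sqrt(2.0 * h) * perturbations[k]
        samples[k] = theta
    return samples

d, h, n_steps = 2, 0.05, 4096                # 4096 = 2**12, a power of 2 for Sobol'
grad_U = lambda th: th                       # U(theta) = ||theta||^2 / 2, target N(0, I)
theta0 = np.zeros(d)

# Standard LMC: i.i.d. Gaussian perturbations.
rng = np.random.default_rng(0)
xi_iid = rng.standard_normal((n_steps, d))
iid_samples = lmc(grad_U, theta0, h, n_steps, xi_iid)

# Quasi-random variant: map low-discrepancy uniforms to Gaussians with the
# inverse CDF. NOTE: a scrambled Sobol' sequence is an assumption of this
# sketch, used only as a stand-in driver; the paper uses CUD sequences.
u = qmc.Sobol(d=d, scramble=True, seed=0).random(n_steps)
xi_qmc = norm.ppf(u)
qmc_samples = lmc(grad_U, theta0, h, n_steps, xi_qmc)

# Estimate E_pi[||theta||^2] from each chain (approximately d for small h;
# the unadjusted chain carries an O(h) discretization bias).
print(np.mean(np.sum(iid_samples**2, axis=1)),
      np.mean(np.sum(qmc_samples**2, axis=1)))
```

Note the design point the abstract hinges on: only the source of the perturbations changes between the two runs; the gradient step and averaging are identical, so any error reduction comes from the low-discrepancy structure of the driving sequence.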
Cite
Text
Liu. "Langevin Quasi-Monte Carlo." Neural Information Processing Systems, 2023.Markdown
[Liu. "Langevin Quasi-Monte Carlo." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/liu2023neurips-langevin/)BibTeX
@inproceedings{liu2023neurips-langevin,
  title     = {{Langevin Quasi-Monte Carlo}},
  author    = {Liu, Sifan},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/liu2023neurips-langevin/}
}