Sampling from Non-Log-Concave Distributions via Variance-Reduced Gradient Langevin Dynamics

Abstract

We study stochastic variance reduction-based Langevin dynamics algorithms, SVRG-LD and SAGA-LD \citep{dubey2016variance}, for sampling from non-log-concave distributions. Under certain assumptions on the log density function, we establish convergence guarantees for SVRG-LD and SAGA-LD in $2$-Wasserstein distance. More specifically, we show that both SVRG-LD and SAGA-LD require $\tilde O\big(n+n^{3/4}/\epsilon^2 + n^{1/2}/\epsilon^4\big)\cdot \exp\big(\tilde O(d+\gamma)\big)$ stochastic gradient evaluations to achieve $\epsilon$-accuracy in $2$-Wasserstein distance, which outperforms the $\tilde O\big(n/\epsilon^4\big)\cdot \exp\big(\tilde O(d+\gamma)\big)$ gradient complexity of the Langevin Monte Carlo method \citep{raginsky2017non}. Experiments on both synthetic and real data support our theory.
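To make the abstract concrete, here is a minimal NumPy sketch of an SVRG-LD-style update: a full gradient is recomputed once per epoch at a snapshot iterate, each inner step forms a semi-stochastic (variance-reduced) gradient from a minibatch, and an Euler-Maruyama Langevin step with Gaussian noise is taken at inverse temperature $\beta$. All function and parameter names (`grad_f`, `eta`, `beta`, etc.) are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def svrg_ld(grad_f, n, w0, eta, beta, num_epochs, epoch_len, batch_size, rng=None):
    """Sketch of SVRG-LD: variance-reduced gradient Langevin dynamics.

    Targets a density proportional to exp(-beta * F(w)) with
    F(w) = (1/n) * sum_i f_i(w).  `grad_f(w, idx)` is assumed to return
    the average gradient of the components f_i, i in idx, at w.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w0, dtype=float)
    d = w.size
    samples = []
    for _ in range(num_epochs):
        w_snap = w.copy()                      # snapshot iterate
        g_snap = grad_f(w_snap, np.arange(n))  # full gradient at snapshot
        for _ in range(epoch_len):
            idx = rng.integers(0, n, size=batch_size)
            # semi-stochastic gradient: unbiased, with reduced variance
            g = grad_f(w, idx) - grad_f(w_snap, idx) + g_snap
            # Euler-Maruyama Langevin step at inverse temperature beta
            w = w - eta * g + np.sqrt(2.0 * eta / beta) * rng.standard_normal(d)
            samples.append(w.copy())
    return samples
```

The variance-reduced estimate is what allows trading the $\tilde O\big(n/\epsilon^4\big)$ gradient cost of plain Langevin Monte Carlo for the $\tilde O\big(n+n^{3/4}/\epsilon^2 + n^{1/2}/\epsilon^4\big)$ rate quoted in the abstract.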

Cite

Text

Zou et al. "Sampling from Non-Log-Concave Distributions via Variance-Reduced Gradient Langevin Dynamics." Artificial Intelligence and Statistics, 2019.

Markdown

[Zou et al. "Sampling from Non-Log-Concave Distributions via Variance-Reduced Gradient Langevin Dynamics." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/zou2019aistats-sampling/)

BibTeX

@inproceedings{zou2019aistats-sampling,
  title     = {{Sampling from Non-Log-Concave Distributions via Variance-Reduced Gradient Langevin Dynamics}},
  author    = {Zou, Difan and Xu, Pan and Gu, Quanquan},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2019},
  pages     = {2936--2945},
  volume    = {89},
  url       = {https://mlanthology.org/aistats/2019/zou2019aistats-sampling/}
}