Stochastic Gradient Hamiltonian Monte Carlo Methods with Recursive Variance Reduction

Abstract

Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) algorithms have received increasing attention in both theory and practice. In this paper, we propose a Stochastic Recursive Variance-Reduced gradient HMC (SRVR-HMC) algorithm. It makes use of a semi-stochastic gradient estimator that recursively accumulates gradient information to reduce the variance of the stochastic gradient. We provide a convergence analysis of SRVR-HMC for sampling from a class of non-log-concave distributions and show that SRVR-HMC converges faster than all existing HMC-type algorithms based on underdamped Langevin dynamics. Thorough experiments on synthetic and real-world datasets validate our theory and demonstrate the superiority of SRVR-HMC.
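The abstract combines two ingredients: a recursive (SARAH/SPIDER-style) semi-stochastic gradient estimator and an HMC update based on underdamped Langevin dynamics. The following is a minimal sketch of how they fit together, not the authors' exact update rule: it uses a plain Euler-Maruyama discretization, and all names and values (`grad_f`, `eta`, `gamma`, `u`, the batch and epoch sizes, and the toy Gaussian target) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sum target: f(x) = (1/n) * sum_i 0.5 * ||x - a_i||^2, so that
# exp(-f) is a Gaussian centered at the mean of the anchors a_i.
n, d = 1000, 2
anchors = rng.normal(size=(n, d))

def grad_f(x, idx):
    # Average gradient of the component functions f_i for i in idx.
    return x - anchors[idx].mean(axis=0)

def srvr_hmc(num_epochs=200, epoch_len=10, batch=32, eta=0.05, gamma=2.0, u=1.0):
    x = np.zeros(d)   # position
    v = np.zeros(d)   # velocity (momentum)
    samples = []
    for _ in range(num_epochs):
        # Anchor the recursive estimator with a full gradient each epoch.
        g = grad_f(x, np.arange(n))
        for _ in range(epoch_len):
            # One Euler-Maruyama step of underdamped Langevin dynamics,
            # driven by the semi-stochastic gradient estimate g.
            noise = rng.normal(size=d)
            v = v - eta * (gamma * v + u * g) + np.sqrt(2.0 * gamma * u * eta) * noise
            x_new = x + eta * v
            # Recursive variance-reduced update: correct g with the minibatch
            # gradient *difference* between consecutive iterates.
            idx = rng.integers(n, size=batch)
            g = g + grad_f(x_new, idx) - grad_f(x, idx)
            x = x_new
            samples.append(x.copy())
    return np.array(samples)

samples = srvr_hmc()
print("empirical mean:", samples[len(samples) // 2:].mean(axis=0))
print("target mean:   ", anchors.mean(axis=0))
```

The key point the sketch illustrates is that, after the full-gradient anchor, each minibatch only estimates the gradient *difference* between consecutive iterates, which shrinks with the step size; this is why the estimator's variance is much smaller than that of a plain stochastic gradient.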

Cite

Text

Zou et al. "Stochastic Gradient Hamiltonian Monte Carlo Methods with Recursive Variance Reduction." Neural Information Processing Systems, 2019.

Markdown

[Zou et al. "Stochastic Gradient Hamiltonian Monte Carlo Methods with Recursive Variance Reduction." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/zou2019neurips-stochastic/)

BibTeX

@inproceedings{zou2019neurips-stochastic,
  title     = {{Stochastic Gradient Hamiltonian Monte Carlo Methods with Recursive Variance Reduction}},
  author    = {Zou, Difan and Xu, Pan and Gu, Quanquan},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {3835--3846},
  url       = {https://mlanthology.org/neurips/2019/zou2019neurips-stochastic/}
}