Variance Reduction for Random Coordinate Descent-Langevin Monte Carlo

Abstract

Sampling from a log-concave distribution is a core problem with wide applications in Bayesian statistics and machine learning. While most gradient-free methods have slow convergence rates, Langevin Monte Carlo (LMC) converges quickly but requires computing gradients. In practice one uses finite-difference approximations as surrogates, which makes the method expensive in high dimensions.
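To make the setup concrete, here is a minimal sketch of LMC where the full gradient is replaced by a single randomly chosen coordinate's finite-difference estimate, so each step needs only two evaluations of the potential instead of a full gradient. This is an illustrative random-coordinate LMC baseline, not the paper's variance-reduced scheme; the function names and parameters are assumptions for the example.

```python
import numpy as np

def rcd_lmc(partial_deriv, x0, step, n_steps, rng):
    """Langevin Monte Carlo with a random-coordinate gradient estimator.

    partial_deriv(x, i) estimates the i-th partial derivative of the
    potential f; the update uses d * e_i * partial_deriv(x, i), an
    unbiased estimator of the full gradient of f at x.
    """
    d = x0.size
    x = x0.copy()
    samples = np.empty((n_steps, d))
    for t in range(n_steps):
        i = rng.integers(d)                  # pick a coordinate uniformly
        g = np.zeros(d)
        g[i] = d * partial_deriv(x, i)       # unbiased gradient estimate
        x = x - step * g + np.sqrt(2.0 * step) * rng.standard_normal(d)
        samples[t] = x
    return samples

# Example target: standard Gaussian, f(x) = ||x||^2 / 2, with the partial
# derivative approximated by central finite differencing.
def f(x):
    return 0.5 * np.dot(x, x)

def fd_partial(x, i, h=1e-5):
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x - e)) / (2.0 * h)

rng = np.random.default_rng(0)
chain = rcd_lmc(fd_partial, np.zeros(2), step=0.05, n_steps=20000, rng=rng)
```

The coordinate estimator keeps the per-step cost independent of the dimension, but its variance grows with d, which is the issue the paper's variance-reduction techniques address.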

Cite

Text

Ding and Li. "Variance Reduction for Random Coordinate Descent-Langevin Monte Carlo." Neural Information Processing Systems, 2020.

Markdown

[Ding and Li. "Variance Reduction for Random Coordinate Descent-Langevin Monte Carlo." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/ding2020neurips-variance/)

BibTeX

@inproceedings{ding2020neurips-variance,
  title     = {{Variance Reduction for Random Coordinate Descent-Langevin Monte Carlo}},
  author    = {Ding, Zhiyan and Li, Qin},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/ding2020neurips-variance/}
}