Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex
Abstract
In this paper we investigate the use of Langevin Monte Carlo methods on the probability simplex and propose a new method, stochastic gradient Riemannian Langevin dynamics, which is simple to implement and can be applied online. We apply this method to latent Dirichlet allocation in an online setting, and demonstrate that it achieves substantial performance improvements over state-of-the-art online variational Bayesian methods.
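The core idea can be illustrated on a toy problem. The sketch below, which is an illustrative simplification and not the paper's full online LDA algorithm, samples from a Dirichlet posterior using the expanded-mean parameterisation pi = theta / sum(theta): a Gamma prior and the Riemannian metric correction yield the drift term (a - theta), the likelihood contributes (counts - pi * n_total), the injected noise is scaled by theta, and reflection at zero keeps theta positive. All names, step sizes, and data here are made up for the example.

```python
import numpy as np

def sgrld_dirichlet(counts, a=1.0, eps=1e-3, n_iters=20000, seed=0):
    """Toy stochastic gradient Riemannian Langevin dynamics (SGRLD) sampler
    for a Dirichlet posterior, using the expanded-mean parameterisation
    pi = theta / sum(theta) with theta > 0.

    Target: Dirichlet(a + counts), i.e. a Dirichlet(a) prior updated with
    multinomial counts. For simplicity the full counts are used at every
    step; in the online setting a minibatch estimate would replace them.
    """
    rng = np.random.default_rng(seed)
    K = len(counts)
    n_total = counts.sum()
    theta = np.ones(K)              # positive unnormalised parameters
    samples = []
    for t in range(n_iters):
        pi = theta / theta.sum()
        # Drift in the expanded-mean parameterisation:
        #   (a - theta): Gamma(a, 1) prior plus the metric's correction term
        #   (counts - pi * n_total): preconditioned likelihood gradient
        drift = a - theta + (counts - pi * n_total)
        # Noise covariance eps * diag(theta) from the Riemannian metric
        noise = np.sqrt(eps * theta) * rng.standard_normal(K)
        theta = np.abs(theta + 0.5 * eps * drift + noise)  # reflect at zero
        if t > n_iters // 2:        # discard the first half as burn-in
            samples.append(theta / theta.sum())
    return np.mean(samples, axis=0)
```

Averaging the post-burn-in simplex points should land close to the analytic posterior mean (a + counts) / sum(a + counts), which gives a quick sanity check on the update.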
Cite
Text
Patterson and Teh. "Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex." Neural Information Processing Systems, 2013.
Markdown
[Patterson and Teh. "Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/patterson2013neurips-stochastic/)
BibTeX
@inproceedings{patterson2013neurips-stochastic,
  title = {{Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex}},
  author = {Patterson, Sam and Teh, Yee Whye},
  booktitle = {Neural Information Processing Systems},
  year = {2013},
  pages = {3102--3110},
  url = {https://mlanthology.org/neurips/2013/patterson2013neurips-stochastic/}
}