Efficient Training of LDA on a GPU by Mean-for-Mode Estimation
Abstract
We introduce Mean-for-Mode estimation, a variant of an uncollapsed Gibbs sampler that we use to train LDA on a GPU. The algorithm combines benefits of both uncollapsed and collapsed Gibbs samplers. Like a collapsed Gibbs sampler — and unlike an uncollapsed Gibbs sampler — it has good statistical performance, and can use sampling complexity reduction techniques such as sparsity. Meanwhile, like an uncollapsed Gibbs sampler — and unlike a collapsed Gibbs sampler — it is embarrassingly parallel, and can use approximate counters.
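To make the contrast between the two sampler families concrete, here is a minimal, hypothetical sketch (not the authors' code) of an uncollapsed Gibbs-style LDA trainer in plain NumPy. A standard uncollapsed sampler would draw the document-topic and topic-word distributions from their Dirichlet posteriors; this sketch plugs in the posterior *means* instead, which is one reading of the Mean-for-Mode idea described in the abstract. All function and variable names are illustrative assumptions.

```python
import numpy as np

def train_lda(docs, V, K, alpha=0.1, beta=0.01, iters=50, seed=0):
    """docs: list of lists of word ids in [0, V); returns (theta, phi).
    Hypothetical sketch of an uncollapsed Gibbs-style LDA trainer."""
    rng = np.random.default_rng(seed)
    D = len(docs)
    # z[d][i]: topic assignment of the i-th token of document d
    z = [rng.integers(K, size=len(doc)) for doc in docs]
    for _ in range(iters):
        ndk = np.zeros((D, K))          # document-topic counts
        nkw = np.zeros((K, V))          # topic-word counts
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                ndk[d, z[d][i]] += 1
                nkw[z[d][i], w] += 1
        # Plug in the Dirichlet posterior means rather than sampling theta, phi
        # (assumed illustration of the Mean-for-Mode substitution)
        theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
        phi = (nkw + beta) / (nkw + beta).sum(axis=1, keepdims=True)
        # Resample token-topic assignments; given theta and phi each token is
        # independent, which is what makes this step embarrassingly parallel
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                p = theta[d] * phi[:, w]
                z[d][i] = rng.choice(K, p=p / p.sum())
    return theta, phi

# Tiny usage example with toy data
docs = [[0, 1, 2, 1], [3, 4, 3, 4, 4], [0, 2, 2, 1]]
theta, phi = train_lda(docs, V=5, K=2)
print(theta.round(2))
```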
Cite
Text
Tristan et al. "Efficient Training of LDA on a GPU by Mean-for-Mode Estimation." International Conference on Machine Learning, 2015.
Markdown
[Tristan et al. "Efficient Training of LDA on a GPU by Mean-for-Mode Estimation." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/tristan2015icml-efficient/)
BibTeX
@inproceedings{tristan2015icml-efficient,
title = {{Efficient Training of LDA on a GPU by Mean-for-Mode Estimation}},
author = {Tristan, Jean-Baptiste and Tassarotti, Joseph and Steele, Guy},
booktitle = {International Conference on Machine Learning},
year = {2015},
  pages = {59--68},
volume = {37},
url = {https://mlanthology.org/icml/2015/tristan2015icml-efficient/}
}