Memory Bounded Inference in Topic Models
Abstract
What types of algorithms and statistical techniques support learning from very large datasets over long stretches of time? We address this question through a memory bounded version of a variational EM algorithm that approximates inference in a topic model. The algorithm alternates two phases, "model building" and "model compression", in order to always satisfy a given memory constraint. The model building phase grows its internal representation (the number of topics) as more data arrives, through Bayesian model selection. Compression is achieved by merging data items into clumps and caching only their sufficient statistics. Empirically, the resulting algorithm is able to handle datasets that are orders of magnitude larger than the standard batch version.
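The compression idea — replacing raw data items with "clumps" that cache only additive sufficient statistics — can be illustrated with a toy sketch. This is not the paper's algorithm: the `ClumpCache` class, the fixed-capacity trigger, and the nearest-means merge rule are illustrative assumptions for scalar data, chosen to show why memory stays bounded while the statistics needed for EM-style updates are preserved.

```python
class ClumpCache:
    """Toy memory-bounded cache: when capacity is exceeded, merge the two
    clumps with the closest means, keeping only sufficient statistics
    (count, sum, sum of squares) instead of the raw data points."""

    def __init__(self, max_items):
        self.max_items = max_items
        self.clumps = []  # each clump: [n, sum, sum_of_squares]

    def add(self, x):
        self.clumps.append([1, float(x), float(x) ** 2])
        if len(self.clumps) > self.max_items:
            self._merge_closest()

    def _merge_closest(self):
        # Pick the pair of clumps whose means are closest and merge them.
        means = [s / n for n, s, _ in self.clumps]
        i, j = min(
            ((a, b) for a in range(len(means)) for b in range(a + 1, len(means))),
            key=lambda ab: abs(means[ab[0]] - means[ab[1]]),
        )
        n2, s2, q2 = self.clumps.pop(j)
        n1, s1, q1 = self.clumps[i]
        # Sufficient statistics are additive, so merging loses no
        # information needed for moment-based (EM-style) updates.
        self.clumps[i] = [n1 + n2, s1 + s2, q1 + q2]


cache = ClumpCache(max_items=4)
for x in [0.0, 0.1, 5.0, 5.1, 10.0, 10.2]:
    cache.add(x)
# The cache never holds more than 4 clumps, regardless of stream length,
# yet the total count and sums over all observed data are retained.
```

The same principle scales to the paper's setting: each clump's cached statistics stand in for many documents, so the memory footprint is fixed while the variational updates still see the full data through the aggregated statistics.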
Cite
Text
Gomes et al. "Memory Bounded Inference in Topic Models." International Conference on Machine Learning, 2008. doi:10.1145/1390156.1390200
Markdown
[Gomes et al. "Memory Bounded Inference in Topic Models." International Conference on Machine Learning, 2008.](https://mlanthology.org/icml/2008/gomes2008icml-memory/) doi:10.1145/1390156.1390200
BibTeX
@inproceedings{gomes2008icml-memory,
title = {{Memory Bounded Inference in Topic Models}},
author = {Gomes, Ryan and Welling, Max and Perona, Pietro},
booktitle = {International Conference on Machine Learning},
year = {2008},
  pages = {344--351},
doi = {10.1145/1390156.1390200},
url = {https://mlanthology.org/icml/2008/gomes2008icml-memory/}
}