Scalable Inference for Logistic-Normal Topic Models
Abstract
Logistic-normal topic models can effectively discover correlation structures among latent topics. However, their inference remains a challenge because of the non-conjugacy between the logistic-normal prior and the multinomial topic mixing proportions. Existing algorithms either make restrictive mean-field assumptions or do not scale to large applications. This paper presents a partially collapsed Gibbs sampling algorithm that approaches the provably correct posterior distribution by exploiting data augmentation. To improve time efficiency, we further present a parallel implementation that can handle large-scale applications and learn the correlation structures of thousands of topics from millions of documents. Extensive empirical results demonstrate the promise of the approach.
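The abstract only names the data-augmentation idea; the sampler builds on the Polya-Gamma augmentation of Polson, Scott and Windle (2013), which turns the non-conjugate logistic-normal/multinomial coupling into conditionally Gaussian updates. Below is a minimal single-document NumPy sketch of that flavor of update, not the paper's algorithm: the truncated PG sampler, the diagonal-prior simplification, and all variable names are our illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sample_pg(b, c, trunc=200):
    # Approximate draw from the Polya-Gamma PG(b, c) distribution via the
    # truncated sum-of-gammas representation (Polson, Scott & Windle, 2013).
    m = np.arange(1, trunc + 1)
    g = rng.gamma(shape=b, scale=1.0, size=trunc)
    return (g / ((m - 0.5) ** 2 + (c / (2.0 * np.pi)) ** 2)).sum() / (2.0 * np.pi ** 2)

def gibbs_sweep_eta(eta, counts, mu, sigma2):
    # One Gibbs sweep over the K components of a document's logistic-normal
    # parameters eta, given its per-topic word counts. For brevity this sketch
    # assumes a diagonal prior N(mu, diag(sigma2)); a full covariance only
    # changes the conditional prior mean and variance of each eta[k].
    N = counts.sum()
    for k in range(len(eta)):
        zeta = np.log(np.exp(np.delete(eta, k)).sum())  # log-sum-exp of the rest
        rho = eta[k] - zeta
        lam = sample_pg(N, rho)        # augmentation: lam | eta ~ PG(N, rho)
        kappa = counts[k] - N / 2.0
        prec = lam + 1.0 / sigma2[k]   # conditional for eta[k] is now Gaussian
        mean = (kappa + lam * zeta + mu[k] / sigma2[k]) / prec
        eta[k] = rng.normal(mean, 1.0 / np.sqrt(prec))
    return eta

# Toy usage: 5 topics, 100 words, a flat diagonal prior.
K = 5
counts = rng.multinomial(100, np.ones(K) / K)
eta = rng.normal(size=K)
for _ in range(50):
    eta = gibbs_sweep_eta(eta, counts, np.zeros(K), np.ones(K))

Each sweep alternates an exact Gaussian draw for eta[k] with a PG draw for the augmentation variable; the sketch only illustrates why the augmentation restores conjugacy, whereas the paper's partially collapsed sampler and its parallel implementation operate over millions of documents.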
Cite
Text
Chen et al. "Scalable Inference for Logistic-Normal Topic Models." Neural Information Processing Systems, 2013.
Markdown
[Chen et al. "Scalable Inference for Logistic-Normal Topic Models." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/chen2013neurips-scalable/)
BibTeX
@inproceedings{chen2013neurips-scalable,
title = {{Scalable Inference for Logistic-Normal Topic Models}},
author = {Chen, Jianfei and Zhu, Jun and Wang, Zi and Zheng, Xun and Zhang, Bo},
booktitle = {Neural Information Processing Systems},
year = {2013},
pages = {2445--2453},
url = {https://mlanthology.org/neurips/2013/chen2013neurips-scalable/}
}