Understanding the Limiting Factors of Topic Modeling via Posterior Contraction Analysis

Abstract

Topic models such as latent Dirichlet allocation (LDA) have become a staple in the machine learning modeling toolbox. They have been applied to a vast variety of data sets, contexts, and tasks, with varying degrees of success. However, to date there is almost no formal theory explicating LDA's behavior, and despite its familiarity there is very little systematic analysis of, or guidance on, the properties of the data that affect the model's inferential performance. This paper seeks to address this gap by providing a systematic analysis of the factors that characterize LDA's performance. We present theorems elucidating the posterior contraction rates of the topics as the amount of data increases, together with a thorough supporting empirical study on synthetic and real data sets, including news articles, web pages, and tweets. Based on these results, we provide practical guidance on how to identify suitable data sets for topic models and how to specify particular model parameters.
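To make the kind of contraction experiment the abstract describes concrete, below is a minimal sketch (not the authors' code): it samples documents from a known LDA model, refits LDA at increasing corpus sizes, and tracks how far the recovered topics are from the truth. The topic count `K`, vocabulary size `V`, document length `doc_len`, Dirichlet hyperparameters, and the helpers `sample_corpus` and `topic_error` are all illustrative choices, not values from the paper.

```python
# Sketch: empirically observe topic estimates "contracting" toward the truth
# as the number of documents grows, under an assumed synthetic LDA setup.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
K, V, doc_len = 5, 200, 100  # illustrative: topics, vocab size, tokens/doc

# Ground-truth topics: K sparse Dirichlet draws over the vocabulary.
true_topics = rng.dirichlet(np.full(V, 0.1), size=K)

def sample_corpus(n_docs):
    """Draw a bag-of-words corpus from the true LDA model."""
    counts = np.zeros((n_docs, V), dtype=int)
    for d in range(n_docs):
        theta = rng.dirichlet(np.full(K, 0.5))    # document-topic weights
        z = rng.choice(K, size=doc_len, p=theta)  # topic assignment per token
        for k in range(K):
            n_k = int(np.sum(z == k))
            if n_k:
                counts[d] += rng.multinomial(n_k, true_topics[k])
    return counts

def topic_error(est, truth):
    """Mean total-variation distance after matching estimated to true topics."""
    est = est / est.sum(axis=1, keepdims=True)
    cost = np.array([[0.5 * np.abs(e - t).sum() for t in truth] for e in est])
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

for n_docs in (50, 200, 800):
    lda = LatentDirichletAllocation(n_components=K, random_state=0)
    lda.fit(sample_corpus(n_docs))
    err = topic_error(lda.components_, true_topics)
    print(f"{n_docs:4d} docs -> avg topic TV error: {err:.3f}")
```

Under these assumptions the printed error should shrink as the corpus grows, mirroring (informally) the posterior contraction behavior the paper analyzes; the paper itself studies this formally, including the interplay between document count and document length.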

Cite

Text

Tang et al. "Understanding the Limiting Factors of Topic Modeling via Posterior Contraction Analysis." International Conference on Machine Learning, 2014.

Markdown

[Tang et al. "Understanding the Limiting Factors of Topic Modeling via Posterior Contraction Analysis." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/tang2014icml-understanding/)

BibTeX

@inproceedings{tang2014icml-understanding,
  title     = {{Understanding the Limiting Factors of Topic Modeling via Posterior Contraction Analysis}},
  author    = {Tang, Jian and Meng, Zhaoshi and Nguyen, Xuanlong and Mei, Qiaozhu and Zhang, Ming},
  booktitle = {International Conference on Machine Learning},
  year      = {2014},
  pages     = {190--198},
  volume    = {32},
  url       = {https://mlanthology.org/icml/2014/tang2014icml-understanding/}
}