How Many Topics? Stability Analysis for Topic Models
Abstract
Topic modeling refers to the task of discovering the underlying thematic structure in a text corpus, where the output is commonly presented as a report of the top terms appearing in each topic. Despite the diversity of topic modeling algorithms that have been proposed, a common challenge in successfully applying these techniques is the selection of an appropriate number of topics for a given corpus. Choosing too few topics will produce results that are overly broad, while choosing too many will result in the "over-clustering" of a corpus into many small, highly-similar topics. In this paper, we propose a term-centric stability analysis strategy to address this issue, the idea being that a model with an appropriate number of topics will be more robust to perturbations in the data. Using a topic modeling approach based on matrix factorization, evaluations performed on a range of corpora show that this strategy can successfully guide the model selection process.
Cite
Text
Greene et al. "How Many Topics? Stability Analysis for Topic Models." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2014. doi:10.1007/978-3-662-44848-9_32
Markdown
[Greene et al. "How Many Topics? Stability Analysis for Topic Models." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2014.](https://mlanthology.org/ecmlpkdd/2014/greene2014ecmlpkdd-many/) doi:10.1007/978-3-662-44848-9_32
BibTeX
@inproceedings{greene2014ecmlpkdd-many,
title = {{How Many Topics? Stability Analysis for Topic Models}},
author = {Greene, Derek and O'Callaghan, Derek and Cunningham, P{\'a}draig},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2014},
pages = {498--513},
doi = {10.1007/978-3-662-44848-9_32},
url = {https://mlanthology.org/ecmlpkdd/2014/greene2014ecmlpkdd-many/}
}