A Reduction for Efficient LDA Topic Reconstruction

Abstract

We present a novel approach to LDA (Latent Dirichlet Allocation) topic reconstruction. The main technical idea is to show that the distribution over documents generated by LDA can be transformed into a distribution for a much simpler generative model, in which documents are generated from *the same set of topics* but have a much simpler structure: each document is single-topic, and topics are chosen uniformly at random. Furthermore, this reduction is approximation preserving, in the sense that approximate distributions -- the only ones we can hope to compute in practice -- are mapped to approximate distributions in the simplified world. This opens up the possibility of efficiently reconstructing LDA topics in a roundabout way: compute an approximate document distribution from the given corpus, transform it into an approximate distribution for the single-topic world, and run a reconstruction algorithm in the uniform, single-topic world -- a much simpler task than direct LDA reconstruction. Indeed, we show the viability of the approach by giving very simple algorithms for a generalization of two notable cases that have been studied in the literature: $p$-separability and Gibbs sampling for matrix-like topics.
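The abstract describes a three-step pipeline: estimate a document distribution from the corpus, map it to the single-topic world, and reconstruct topics there. Below is a minimal sketch of that pipeline in Python. It is purely illustrative: the word-pair statistics, the identity placeholder standing in for the paper's reduction, and the spectral reconstruction step are all assumptions, not the authors' actual construction.

```python
import numpy as np
from collections import Counter

def empirical_pair_distribution(corpus):
    """Step 1: approximate the document distribution from the corpus
    (here, the empirical frequency of unordered word pairs)."""
    counts = Counter()
    for doc in corpus:
        for i in range(len(doc)):
            for j in range(i + 1, len(doc)):
                counts[tuple(sorted((doc[i], doc[j])))] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def transform_to_single_topic(pair_dist):
    """Step 2 (placeholder): map the LDA statistics to those the
    uniform single-topic model would induce. The paper's actual
    transformation differs; an identity map is used here."""
    return pair_dist

def reconstruct_topics(single_topic_dist, k, vocab_size):
    """Step 3 (illustrative): reconstruct topics in the single-topic
    world, here via a crude spectral embedding of the word
    co-occurrence matrix."""
    M = np.zeros((vocab_size, vocab_size))
    for (u, v), p in single_topic_dist.items():
        M[u, v] = M[v, u] = p
    # Top-k eigenvectors give an embedding of words by topic.
    _, vecs = np.linalg.eigh(M)
    return vecs[:, -k:]

# Toy usage with word-id documents.
corpus = [[0, 1, 0, 2], [3, 4, 3, 4]]
dist = empirical_pair_distribution(corpus)
embedding = reconstruct_topics(transform_to_single_topic(dist), k=2, vocab_size=5)
```

Under the approximation-preservation guarantee, errors made in step 1 carry over in a controlled way through step 2, which is what lets a simple step-3 algorithm suffice.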

Cite

Text

Almanza et al. "A Reduction for Efficient LDA Topic Reconstruction." Neural Information Processing Systems, 2018.

Markdown

[Almanza et al. "A Reduction for Efficient LDA Topic Reconstruction." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/almanza2018neurips-reduction/)

BibTeX

@inproceedings{almanza2018neurips-reduction,
  title     = {{A Reduction for Efficient LDA Topic Reconstruction}},
  author    = {Almanza, Matteo and Chierichetti, Flavio and Panconesi, Alessandro and Vattani, Andrea},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {7869--7879},
  url       = {https://mlanthology.org/neurips/2018/almanza2018neurips-reduction/}
}