Learning Mixtures of Tree Graphical Models
Abstract
We consider unsupervised estimation of mixtures of discrete graphical models, where the class variable is hidden and each mixture component can have a potentially different Markov graph structure and parameters over the observed variables. We propose a novel method for estimating the mixture components with provable guarantees. Our output is a tree-mixture model which serves as a good approximation to the underlying graphical model mixture. The sample and computational requirements for our method scale as $\mathrm{poly}(p, r)$, for an $r$-component mixture of $p$-variate graphical models, for a wide class of models which includes tree mixtures and mixtures over bounded degree graphs.
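To make the model class concrete: a mixture of tree graphical models first draws a hidden class label, then samples the observed variables from that component's tree via ancestral sampling. The sketch below illustrates this generative process for binary variables; the tree structures, mixture weights, and the symmetric edge-flip parameterization are illustrative assumptions, not the paper's estimation method.

```python
import random

def sample_tree(parents, root_prob, flip_prob):
    """Ancestral sampling on a rooted tree of binary variables.

    parents[i] is the parent index of node i (None for the root),
    with nodes listed in topological order. flip_prob[i] is
    P(x_i != x_parent(i)) -- a simple symmetric edge model (assumption).
    """
    p = len(parents)
    x = [0] * p
    for i in range(p):
        if parents[i] is None:
            x[i] = int(random.random() < root_prob)
        else:
            flip = int(random.random() < flip_prob[i])
            x[i] = x[parents[i]] ^ flip
    return x

def sample_mixture(weights, components):
    """Draw the hidden class label, then sample that component's tree."""
    h = random.choices(range(len(weights)), weights=weights)[0]
    parents, root_prob, flip_prob = components[h]
    return h, sample_tree(parents, root_prob, flip_prob)

# Two components (r = 2) over p = 4 variables with different
# Markov graphs: a chain 0-1-2-3 and a star rooted at node 0.
comp0 = ([None, 0, 1, 2], 0.5, [None, 0.1, 0.1, 0.1])
comp1 = ([None, 0, 0, 0], 0.5, [None, 0.4, 0.4, 0.4])
h, x = sample_mixture([0.6, 0.4], [comp0, comp1])
```

Note that the class label `h` is never observed by the learner; estimation must recover each component's tree structure and parameters from samples of `x` alone.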
Cite
Text

Anandkumar et al. "Learning Mixtures of Tree Graphical Models." Neural Information Processing Systems, 2012.

Markdown

[Anandkumar et al. "Learning Mixtures of Tree Graphical Models." Neural Information Processing Systems, 2012.](https://mlanthology.org/neurips/2012/anandkumar2012neurips-learning/)

BibTeX
@inproceedings{anandkumar2012neurips-learning,
  title = {{Learning Mixtures of Tree Graphical Models}},
  author = {Anandkumar, Anima and Hsu, Daniel J. and Huang, Furong and Kakade, Sham M.},
  booktitle = {Neural Information Processing Systems},
  year = {2012},
  pages = {1052--1060},
  url = {https://mlanthology.org/neurips/2012/anandkumar2012neurips-learning/}
}