Mixtures of All Trees
Abstract
Tree-shaped graphical models are widely used for their tractability. However, they unfortunately lack expressive power as they require committing to a particular sparse dependency structure. We propose a novel class of generative models called mixtures of all trees: that is, a mixture over all possible ($n^{n-2}$) tree-shaped graphical models over $n$ variables. We show that it is possible to parameterize this Mixture of All Trees (MoAT) model compactly (using a polynomial-size representation) in a way that allows for tractable likelihood computation and optimization via stochastic gradient descent. Furthermore, by leveraging the tractability of tree-shaped models, we devise fast-converging conditional sampling algorithms for approximate inference, even though our theoretical analysis suggests that exact computation of marginals in the MoAT model is NP-hard. Empirically, MoAT achieves state-of-the-art performance on density estimation benchmarks when compared against powerful probabilistic models including hidden Chow-Liu Trees.
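The abstract's claim of tractable likelihood computation rests on the fact that a weighted sum over all $n^{n-2}$ spanning trees of the complete graph can be evaluated in polynomial time. As a rough, hedged sketch (not the paper's exact parameterization), the weighted matrix-tree (Kirchhoff) theorem reduces such a sum to a single determinant of a reduced graph Laplacian; the function name and the uniform-weight sanity check below are illustrative only.

```python
import numpy as np

def weighted_spanning_tree_sum(W):
    """Sum, over all spanning trees T of the complete graph, of the product of
    edge weights W[i, j] for edges (i, j) in T, computed via the weighted
    matrix-tree (Kirchhoff) theorem.
    W: symmetric (n, n) array of nonnegative edge weights with zero diagonal."""
    L = np.diag(W.sum(axis=1)) - W      # weighted graph Laplacian
    return np.linalg.det(L[1:, 1:])     # any principal cofactor equals the tree sum

# Sanity check: with unit weights the sum counts the spanning trees of the
# complete graph on n nodes, which is n**(n-2) by Cayley's formula.
n = 5
W = np.ones((n, n)) - np.eye(n)
print(weighted_spanning_tree_sum(W))    # ~125.0 == 5**(5 - 2)
```

Since a determinant is differentiable in the edge weights, a likelihood built from such a polynomial-size representation can be optimized with stochastic gradient descent, consistent with the abstract's claim.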
Cite
Text
Selvam et al. "Mixtures of All Trees." Artificial Intelligence and Statistics, 2023.
Markdown
[Selvam et al. "Mixtures of All Trees." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/selvam2023aistats-mixtures/)
BibTeX
@inproceedings{selvam2023aistats-mixtures,
title = {{Mixtures of All Trees}},
author = {Selvam, Nikil Roashan and Zhang, Honghua and Van den Broeck, Guy},
booktitle = {Artificial Intelligence and Statistics},
year = {2023},
pages = {11043--11058},
volume = {206},
url = {https://mlanthology.org/aistats/2023/selvam2023aistats-mixtures/}
}