Probabilistic Inference with Algebraic Constraints: Theoretical Limits and Practical Approximations
Abstract
Weighted model integration (WMI) is a framework for performing advanced probabilistic inference on hybrid domains, i.e., over distributions on mixed continuous-discrete random variables and in the presence of complex logical and arithmetic constraints. In this work, we advance the WMI framework on both the theoretical and the algorithmic side. First, we exactly trace the boundaries of tractability for WMI inference by proving that, to be amenable to exact and efficient inference, a WMI problem must possess a tree-shaped structure with logarithmic diameter. While this result deepens our theoretical understanding of WMI, it also limits the practical applicability of exact WMI solvers to real-world problems. To overcome this, we propose the first approximate WMI solver that does not resort to sampling, but instead performs exact inference on an approximate model. Our solution iteratively performs message passing on a relaxed problem structure to recover certain lost dependencies and, as our experiments suggest, is competitive with state-of-the-art WMI solvers.
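To make the setting concrete, the following is a minimal sketch of a toy WMI computation in Python with SymPy. It is not the paper's solver: the problem, weight function, and constraints are invented for illustration. WMI sums over the Boolean assignments and, for each one, integrates the weight over the region of the continuous variables that satisfies the constraints; query probabilities are ratios of such integrals.

```python
# A minimal WMI sketch (hypothetical toy problem, not the paper's solver):
# one Boolean b, one real x in [0, 2], the algebraic constraint (b -> x <= 1),
# and the per-assignment weight w(x, b) = 2x if b else 1.
from sympy import symbols, integrate

x = symbols('x')

def feasible_interval(b):
    # Feasible region for x: always [0, 2]; additionally x <= 1 when b holds.
    return (0, 1) if b else (0, 2)

def weight(b):
    # Per-assignment weight, i.e., an unnormalized density over x.
    return 2 * x if b else 1

# Partition function: sum over Boolean worlds, integrate over each feasible region.
Z = sum(integrate(weight(b), (x, *feasible_interval(b))) for b in (True, False))

def query_mass(b, hi=0.5):
    # Mass of the query event x <= hi within the feasible region of world b.
    lo, up = feasible_interval(b)
    return integrate(weight(b), (x, lo, min(up, hi)))

p = sum(query_mass(b) for b in (True, False)) / Z
print(Z, p)  # Z = 1 + 2 = 3; query mass = 0.25 + 0.5 = 0.75; so p = 0.25
```

This brute-force sketch enumerates all Boolean assignments, which is exponential in general; the point of the paper's tractability analysis and message-passing solver is precisely to exploit (or approximate) problem structure so that this enumeration is avoided.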
Cite
Text
Zeng et al. "Probabilistic Inference with Algebraic Constraints: Theoretical Limits and Practical Approximations." Neural Information Processing Systems, 2020.
Markdown
[Zeng et al. "Probabilistic Inference with Algebraic Constraints: Theoretical Limits and Practical Approximations." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/zeng2020neurips-probabilistic/)
BibTeX
@inproceedings{zeng2020neurips-probabilistic,
  title     = {{Probabilistic Inference with Algebraic Constraints: Theoretical Limits and Practical Approximations}},
  author    = {Zeng, Zhe and Morettin, Paolo and Yan, Fanqi and Vergari, Antonio and Van den Broeck, Guy},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/zeng2020neurips-probabilistic/}
}