Variational Inference in Mixed Probabilistic Submodular Models
Abstract
We consider the problem of variational inference in probabilistic models with both log-submodular and log-supermodular higher-order potentials. These models can represent arbitrary distributions over binary variables, and thus generalize the commonly used pairwise Markov random fields and models with log-supermodular potentials only, for which efficient approximate inference algorithms are known. While inference in the considered models is #P-hard in general, we present efficient approximate algorithms exploiting recent advances in the field of discrete optimization. We demonstrate the effectiveness of our approach in a large set of experiments, where our model allows reasoning about preferences over sets of items with complements and substitutes.
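To make the setting concrete, here is a minimal illustrative sketch (not the paper's algorithm) of a mixed probabilistic submodular model over binary variables: the distribution is `p(S) ∝ exp(F(S) − G(S))`, where `F` and `G` are both submodular, so `exp(F)` is a log-submodular potential and `exp(−G)` a log-supermodular one. The specific functions (`coverage`, `cut_penalty`) and weights below are hypothetical choices for illustration; exact inference is done by brute-force enumeration, which is only feasible for tiny ground sets since the partition function is #P-hard in general.

```python
import itertools
import math

# Hypothetical example: a mixed model over n = 4 binary variables,
# p(S) ∝ exp(coverage(S) − 0.5 * cut_penalty(S)).

n = 4
ground = list(range(n))

def coverage(S):
    # Facility-location-style function (submodular): best weight covered.
    weights = [0.5, 1.0, 0.8, 0.3]
    return max((weights[i] for i in S), default=0.0)

def cut_penalty(S):
    # Graph-cut value (submodular); its negation is supermodular,
    # so exp(-cut_penalty) acts as a log-supermodular potential.
    edges = [(0, 1), (1, 2), (2, 3)]
    return sum(1.0 for u, v in edges if (u in S) != (v in S))

def log_potential(S):
    return coverage(S) - 0.5 * cut_penalty(S)

# Exact inference by enumerating all 2^n subsets -- viable only for
# tiny n; this is the #P-hard computation the paper approximates.
subsets = [set(c) for r in range(n + 1)
           for c in itertools.combinations(ground, r)]
Z = sum(math.exp(log_potential(S)) for S in subsets)
marginals = [
    sum(math.exp(log_potential(S)) for S in subsets if i in S) / Z
    for i in ground
]
print([round(m, 3) for m in marginals])
```

The cut term couples neighboring items (substitutes), while the coverage term rewards diverse selections; variational methods replace the exponential sum for `Z` with tractable sub- and supergradient-based bounds.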
Cite
Text
Djolonga et al. "Variational Inference in Mixed Probabilistic Submodular Models." Neural Information Processing Systems, 2016.
Markdown
[Djolonga et al. "Variational Inference in Mixed Probabilistic Submodular Models." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/djolonga2016neurips-variational/)
BibTeX
@inproceedings{djolonga2016neurips-variational,
title = {{Variational Inference in Mixed Probabilistic Submodular Models}},
author = {Djolonga, Josip and Tschiatschek, Sebastian and Krause, Andreas},
booktitle = {Neural Information Processing Systems},
year = {2016},
pages = {1759--1767},
url = {https://mlanthology.org/neurips/2016/djolonga2016neurips-variational/}
}