Tractable Learning for Complex Probability Queries

Abstract

Tractable learning aims to learn probabilistic models for which inference is guaranteed to be efficient. However, the particular class of queries that is tractable depends on the model and its underlying representation. Usually this class is MPE or conditional probabilities $\Pr(\mathbf{x} \mid \mathbf{y})$ for joint assignments $\mathbf{x}, \mathbf{y}$. We propose a tractable learner that guarantees efficient inference for a broader class of queries. It simultaneously learns a Markov network and its tractable circuit representation, in order to both guarantee and measure tractability. Our approach differs from earlier work by using Sentential Decision Diagrams (SDDs) as the tractable language instead of Arithmetic Circuits (ACs). SDDs have desirable properties that more general representations such as ACs lack, and these properties enable the basic primitives of Boolean circuit compilation. This allows us to support a broader class of complex probability queries, including counting, threshold, and parity queries, in polytime.
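
To make the query classes concrete, below is a minimal brute-force sketch (not the paper's method) of what counting, threshold, and parity queries ask of a distribution. The toy fully factorized model, the marginals in `theta`, and the helper names `pr` and `query` are all illustrative assumptions; this enumeration costs $O(2^n)$, whereas the paper's contribution is answering such queries in polytime on an SDD circuit.

```python
# Brute-force illustration of complex probability queries over a toy,
# fully factorized Boolean model (an assumption for illustration only;
# the paper instead answers these queries in polytime on an SDD).
from itertools import product

# Hypothetical marginals Pr(X_i = 1) for three independent variables.
theta = [0.2, 0.7, 0.5]

def pr(assignment):
    """Probability of one joint assignment under the factorized toy model."""
    p = 1.0
    for x, t in zip(assignment, theta):
        p *= t if x else (1.0 - t)
    return p

def query(predicate):
    """Pr(predicate holds), by summing over all 2^n joint assignments."""
    return sum(pr(a) for a in product([0, 1], repeat=len(theta)) if predicate(a))

# Counting query:  probability that exactly 2 variables are true.
print(query(lambda a: sum(a) == 2))
# Threshold query: probability that at least 2 variables are true.
print(query(lambda a: sum(a) >= 2))
# Parity query:    probability that an odd number of variables are true.
print(query(lambda a: sum(a) % 2 == 1))
```

In the paper's setting, such a query is instead compiled into a Boolean circuit, combined with the learned SDD using the Apply primitive that SDDs support in polytime, and then answered by weighted model counting on the result.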

Cite

Text

Bekker et al. "Tractable Learning for Complex Probability Queries." Neural Information Processing Systems, 2015.

Markdown

[Bekker et al. "Tractable Learning for Complex Probability Queries." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/bekker2015neurips-tractable/)

BibTeX

@inproceedings{bekker2015neurips-tractable,
  title     = {{Tractable Learning for Complex Probability Queries}},
  author    = {Bekker, Jessa and Davis, Jesse and Choi, Arthur and Darwiche, Adnan and Van den Broeck, Guy},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {2242--2250},
  url       = {https://mlanthology.org/neurips/2015/bekker2015neurips-tractable/}
}