Approximate Inference by Compilation to Arithmetic Circuits

Abstract

Arithmetic circuits (ACs) exploit context-specific independence and determinism to allow exact inference even in networks with high treewidth. In this paper, we introduce the first ever approximate inference methods using ACs, for domains where exact inference remains intractable. We propose and evaluate a variety of techniques based on exact compilation, forward sampling, AC structure learning, Markov network parameter learning, variational inference, and Gibbs sampling. In experiments on eight challenging real-world domains, we find that the methods based on sampling and learning work best: one such method (AC2-F) is faster and usually more accurate than loopy belief propagation, mean field, and Gibbs sampling; another (AC2-G) has a running time similar to Gibbs sampling but is consistently more accurate than all baselines.
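To make the abstract's central object concrete, the sketch below shows how an arithmetic circuit is evaluated bottom-up to answer an exact inference query: leaves are evidence indicators and numeric parameters, internal nodes are sums and products, and setting the indicators to match the evidence before evaluating the root yields the probability of that evidence. This is a minimal illustrative sketch, not the authors' implementation; the `Node` class and `evaluate` function are hypothetical names introduced here for exposition.

```python
from dataclasses import dataclass, field
from typing import List
import math

@dataclass
class Node:
    kind: str                         # "indicator", "param", "sum", or "product"
    value: float = 0.0                # used by "param" leaves
    var: str = ""                     # used by "indicator" leaves
    state: int = 0                    # used by "indicator" leaves
    children: List["Node"] = field(default_factory=list)

def evaluate(node: Node, evidence: dict) -> float:
    """Evaluate the circuit rooted at `node` under the given evidence.
    Unobserved variables have all their indicators set to 1 (summing them out);
    an observed variable has only the indicator matching its value set to 1."""
    if node.kind == "indicator":
        if node.var not in evidence:
            return 1.0
        return 1.0 if evidence[node.var] == node.state else 0.0
    if node.kind == "param":
        return node.value
    child_vals = [evaluate(c, evidence) for c in node.children]
    if node.kind == "sum":
        return sum(child_vals)
    if node.kind == "product":
        return math.prod(child_vals)
    raise ValueError(f"unknown node kind: {node.kind}")

# Tiny example: one Bernoulli variable X with P(X=1) = 0.3, compiled to the AC
#   (lambda_{X=0} * 0.7) + (lambda_{X=1} * 0.3)
x0 = Node("indicator", var="X", state=0)
x1 = Node("indicator", var="X", state=1)
root = Node("sum", children=[
    Node("product", children=[x0, Node("param", value=0.7)]),
    Node("product", children=[x1, Node("param", value=0.3)]),
])
print(evaluate(root, {}))        # 1.0  (X summed out)
print(evaluate(root, {"X": 1}))  # 0.3  (probability of evidence X=1)
```

Each evaluation costs time linear in the circuit size, which is why compilation to a compact AC gives tractable exact inference even when treewidth is high; the paper's contribution is what to do when no compact exact circuit exists.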

Cite

Text

Lowd and Domingos. "Approximate Inference by Compilation to Arithmetic Circuits." Neural Information Processing Systems, 2010.

Markdown

[Lowd and Domingos. "Approximate Inference by Compilation to Arithmetic Circuits." Neural Information Processing Systems, 2010.](https://mlanthology.org/neurips/2010/lowd2010neurips-approximate/)

BibTeX

@inproceedings{lowd2010neurips-approximate,
  title     = {{Approximate Inference by Compilation to Arithmetic Circuits}},
  author    = {Lowd, Daniel and Domingos, Pedro},
  booktitle = {Neural Information Processing Systems},
  year      = {2010},
  pages     = {1477--1485},
  url       = {https://mlanthology.org/neurips/2010/lowd2010neurips-approximate/}
}