Exploiting Data-Independence for Fast Belief-Propagation

Abstract

Maximum a posteriori (MAP) inference in graphical models requires that we maximize the sum of two terms: a data-dependent term, encoding the conditional likelihood of a certain labeling given an observation, and a data-independent term, encoding some prior on labelings. Often, data-dependent factors contain fewer latent variables than data-independent factors -- for instance, many grid and tree-structured models contain only first-order conditionals despite having pairwise priors. In this paper, we note that MAP-inference in such models can be made substantially faster by appropriately preprocessing their data-independent terms. Our main result is to show that message-passing in any such pairwise model has an expected-case exponent of only 1.5 in the number of states per node, leading to significant improvements over existing quadratic-time solutions.
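The core trick the abstract alludes to can be sketched as follows. Each entry of a max-product message has the form max_j [phi(j) + psi(i, j)], where phi is the data-dependent term and psi is the data-independent prior. Because psi does not depend on the observation, the sorted order of each of its rows can be precomputed offline; online, one walks the two sorted lists and stops as soon as some index has appeared in both prefixes, which bounds all unvisited entries. Below is a minimal, hedged Python sketch of this idea -- the function names (`maxsum_pair`, `fast_message`) are our own illustrative choices, not the paper's notation, and the expected-case O(sqrt(N)) stopping behavior is only claimed by the paper, not demonstrated by this toy example.

```python
import numpy as np

def maxsum_pair(a, b, order_a, order_b):
    """Compute max_j a[j] + b[j], given index orders sorting a and b
    in decreasing order. Once an index m has been visited in BOTH
    sorted traversals, any unvisited j satisfies a[j] <= a[m] and
    b[j] <= b[m], so the maximum lies among visited indices."""
    seen_a, seen_b = set(), set()
    best = -np.inf
    for k in range(len(a)):
        ia, ib = order_a[k], order_b[k]
        best = max(best, a[ia] + b[ia], a[ib] + b[ib])
        seen_a.add(ia)
        seen_b.add(ib)
        if ia in seen_b or ib in seen_a:
            break
    return best

def fast_message(phi, psi, psi_orders):
    """Message m(i) = max_j phi(j) + psi[i, j], reusing row
    orderings of psi precomputed offline."""
    phi_order = np.argsort(-phi)  # sorted once per message
    return np.array([
        maxsum_pair(phi, psi[i], phi_order, psi_orders[i])
        for i in range(psi.shape[0])
    ])

rng = np.random.default_rng(0)
N = 50
psi = rng.normal(size=(N, N))          # data-independent prior
psi_orders = np.argsort(-psi, axis=1)  # offline preprocessing
phi = rng.normal(size=N)               # data-dependent term
m = fast_message(phi, psi, psi_orders)
# sanity check against the naive quadratic computation
assert np.allclose(m, (phi + psi).max(axis=1))
```

The offline `argsort` of psi's rows is what "preprocessing the data-independent terms" buys: it is paid once per model, while the early-stopping traversal is paid per message entry.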

Cite

Text

McAuley and Caetano. "Exploiting Data-Independence for Fast Belief-Propagation." International Conference on Machine Learning, 2010.

Markdown

[McAuley and Caetano. "Exploiting Data-Independence for Fast Belief-Propagation." International Conference on Machine Learning, 2010.](https://mlanthology.org/icml/2010/mcauley2010icml-exploiting/)

BibTeX

@inproceedings{mcauley2010icml-exploiting,
  title     = {{Exploiting Data-Independence for Fast Belief-Propagation}},
  author    = {McAuley, Julian J. and Caetano, Tibério S.},
  booktitle = {International Conference on Machine Learning},
  year      = {2010},
  pages     = {767--774},
  url       = {https://mlanthology.org/icml/2010/mcauley2010icml-exploiting/}
}