Loopy Belief Propagation for Bipartite Maximum Weight B-Matching

Abstract

We formulate the weighted b-matching objective function as a probability distribution function and prove that belief propagation (BP) on its graphical model converges to the optimum. Standard BP on our graphical model cannot be computed in polynomial time, but we introduce an algebraic method to circumvent the combinatorial message updates. Empirically, the resulting algorithm is on average faster than popular combinatorial implementations, while still scaling at the same asymptotic rate of $O(bn^3)$. Furthermore, the algorithm shows promising performance in machine learning applications.
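As a rough illustration of the style of message passing the abstract describes, here is a minimal sketch of a log-domain max-product update for bipartite b-matching, assuming a square weight matrix `W` and the same degree `b` for every node. The function and variable names (`bp_bmatch`, `m_rc`, `m_cr`) are illustrative rather than taken from the paper, and the dense double loop does not reproduce the authors' algebraic simplification of the combinatorial messages or their O(bn^3) implementation.

```python
import numpy as np


def bp_bmatch(W, b, iters=200):
    """Sketch: log-domain max-product message passing for bipartite b-matching.

    W : (n, n) array of edge weights between n "row" nodes and n "column" nodes.
    b : target degree of every node (requires b <= n - 1).
    Returns a boolean (n, n) selection matrix built from the final row beliefs.
    """
    n = W.shape[0]
    m_cr = np.zeros((n, n))  # m_cr[j, i]: message from column node j to row node i
    m_rc = np.zeros((n, n))  # m_rc[i, j]: message from row node i to column node j

    def bth_largest_excluding(vals, skip):
        # b-th largest of the competing terms, ignoring the receiving node.
        return np.sort(np.delete(vals, skip))[-b]

    for _ in range(iters):
        new_rc = np.empty_like(m_rc)
        new_cr = np.empty_like(m_cr)
        for i in range(n):
            # Terms competing for row i's b slots: weight plus incoming column message.
            vals = W[i, :] + m_cr[:, i]
            for j in range(n):
                new_rc[i, j] = W[i, j] - bth_largest_excluding(vals, j)
        for j in range(n):
            vals = W[:, j] + m_rc[:, j]
            for i in range(n):
                new_cr[j, i] = W[i, j] - bth_largest_excluding(vals, i)
        m_rc, m_cr = new_rc, new_cr

    # Belief for edge (i, j) is W[i, j] + m_cr[j, i]; each row keeps its top b edges.
    X = np.zeros((n, n), dtype=bool)
    for i in range(n):
        X[i, np.argsort(W[i, :] + m_cr[:, i])[-b:]] = True
    return X


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.random((6, 6))
    print(bp_bmatch(W, b=2).astype(int))
```

If the messages have converged and the optimal b-matching is unique, the row-wise top-b selections also satisfy the column constraints, which is the guarantee the abstract refers to; a practical implementation would add a convergence check instead of a fixed iteration count.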

Cite

Text

Huang and Jebara. "Loopy Belief Propagation for Bipartite Maximum Weight B-Matching." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.

Markdown

[Huang and Jebara. "Loopy Belief Propagation for Bipartite Maximum Weight B-Matching." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.](https://mlanthology.org/aistats/2007/huang2007aistats-loopy/)

BibTeX

@inproceedings{huang2007aistats-loopy,
  title     = {{Loopy Belief Propagation for Bipartite Maximum Weight B-Matching}},
  author    = {Huang, Bert and Jebara, Tony},
  booktitle = {Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics},
  year      = {2007},
  pages     = {195-202},
  volume    = {2},
  url       = {https://mlanthology.org/aistats/2007/huang2007aistats-loopy/}
}