Understanding the Bethe Approximation: When and How Can It Go Wrong?

Abstract

Belief propagation is a remarkably effective tool for inference, even when applied to networks with cycles. It may be viewed as a way to seek the minimum of the Bethe free energy, though with no convergence guarantee in general. A variational perspective shows that, compared to exact inference, this minimization employs two forms of approximation: (i) the true entropy is approximated by the Bethe entropy, and (ii) the minimization is performed over a relaxation of the marginal polytope termed the local polytope. Here we explore when and how the Bethe approximation can fail for binary pairwise models by examining each aspect of the approximation, deriving results both analytically and with new experimental methods.
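The setting the abstract describes can be made concrete with a small sketch. Below is a minimal (illustrative, not from the paper) sum-product belief propagation loop on a binary pairwise model containing a cycle: a 3-node loop with hand-picked unary parameters and couplings. Fixed points of this scheme correspond to stationary points of the Bethe free energy; because the model is tiny, we can also enumerate the exact marginals and measure the approximation error directly.

```python
import itertools
import numpy as np

# Illustrative 3-node binary pairwise model with a single cycle.
# All parameter values below are made up for demonstration.
n = 3
edges = [(0, 1), (1, 2), (0, 2)]
theta = np.array([0.5, -0.3, 0.2])                    # unary parameters
W = {(0, 1): 1.0, (1, 2): -0.8, (0, 2): 0.6}          # pairwise couplings

node_pot = np.exp(np.outer(theta, [0.0, 1.0]))        # psi_i(x_i) = exp(theta_i * x_i)
edge_pot = {e: np.exp(W[e] * np.outer([0, 1], [0, 1])) for e in edges}

neighbors = {i: [] for i in range(n)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

# msgs[(i, j)] is the message from node i to node j, a distribution over x_j
msgs = {(i, j): np.ones(2) / 2 for a, b in edges for (i, j) in ((a, b), (b, a))}
for _ in range(500):
    new = {}
    for (i, j) in msgs:
        incoming = node_pot[i].copy()
        for k in neighbors[i]:
            if k != j:
                incoming *= msgs[(k, i)]
        pot = edge_pot[(i, j)] if (i, j) in edge_pot else edge_pot[(j, i)].T
        m = pot.T @ incoming                          # marginalize over x_i
        new[(i, j)] = m / m.sum()
    delta = max(np.abs(new[k] - msgs[k]).max() for k in msgs)
    msgs = new
    if delta < 1e-12:                                 # converged
        break

# Pseudomarginals ("beliefs") from the converged messages
beliefs = np.array([node_pot[i] * np.prod([msgs[(k, i)] for k in neighbors[i]], axis=0)
                    for i in range(n)])
beliefs /= beliefs.sum(axis=1, keepdims=True)

# Exact marginals by brute-force enumeration of the 2^3 states
exact, Z = np.zeros((n, 2)), 0.0
for x in itertools.product([0, 1], repeat=n):
    x = np.array(x)
    p = np.exp(theta @ x + sum(W[a, b] * x[a] * x[b] for a, b in edges))
    Z += p
    for i in range(n):
        exact[i, x[i]] += p
exact /= Z

err = np.abs(beliefs - exact).max()                   # Bethe pseudomarginal error
```

With the weak couplings chosen here the messages converge and the pseudomarginals land close to the exact marginals; the paper's subject is precisely how this gap behaves as the entropy approximation and the local-polytope relaxation come into play on harder instances.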

Cite

Text

Weller et al. "Understanding the Bethe Approximation: When and How Can It Go Wrong?" Conference on Uncertainty in Artificial Intelligence, 2014.

Markdown

[Weller et al. "Understanding the Bethe Approximation: When and How Can It Go Wrong?" Conference on Uncertainty in Artificial Intelligence, 2014.](https://mlanthology.org/uai/2014/weller2014uai-understanding/)

BibTeX

@inproceedings{weller2014uai-understanding,
  title     = {{Understanding the Bethe Approximation: When and How Can It Go Wrong?}},
  author    = {Weller, Adrian and Tang, Kui and Jebara, Tony and Sontag, David A.},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2014},
  pages     = {868--877},
  url       = {https://mlanthology.org/uai/2014/weller2014uai-understanding/}
}