Probabilistic Variational Bounds for Graphical Models

Abstract

Variational algorithms such as tree-reweighted belief propagation can provide deterministic bounds on the partition function, but are often loose and difficult to use in an ``any-time'' fashion, expending more computation for tighter bounds. On the other hand, Monte Carlo estimators such as importance sampling have excellent any-time behavior, but depend critically on the proposal distribution. We propose a simple Monte Carlo based inference method that augments convex variational bounds by adding importance sampling (IS). We argue that convex variational methods naturally provide good IS proposals that ``cover'' the probability of the target distribution, and reinterpret the variational optimization as designing a proposal that minimizes an upper bound on the variance of our IS estimator. This both provides an accurate estimator and enables the construction of any-time probabilistic bounds that improve quickly and directly on state-of-the-art variational bounds, and provide certificates of accuracy given enough samples relative to the error in the initial bound.
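
The abstract's core idea is to estimate the partition function Z by averaging importance weights p̃(x)/q(x) under a tractable proposal q. Below is a minimal sketch in Python of plain importance sampling on a toy pairwise binary model. The fully factorized proposal built from logistic marginals is an illustrative assumption, not the tree-reweighted proposal the paper constructs, and all names (`log_p_tilde`, `sample_q`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                           # number of binary variables
theta = rng.normal(size=n)                       # unary log-potentials
J = np.triu(0.3 * rng.normal(size=(n, n)), k=1)  # pairwise log-potentials

def log_p_tilde(X):
    """Unnormalized log-density log p~(x) for rows x in {0,1}^n."""
    return X @ theta + np.einsum('bi,ij,bj->b', X, J, X)

# Fully factorized proposal q(x) = prod_i q_i(x_i). We use the logistic
# marginals sigmoid(theta_i) as a crude stand-in; the paper instead derives
# q from a convex (tree-reweighted) bound so that q "covers" the target.
q1 = 1.0 / (1.0 + np.exp(-theta))

def sample_q(m):
    return (rng.random((m, n)) < q1).astype(float)

def log_q(X):
    return (X * np.log(q1) + (1.0 - X) * np.log(1.0 - q1)).sum(axis=1)

# Importance sampling: with weights w = p~(x)/q(x), Z_hat = mean(w) is an
# unbiased estimate of Z. Drawing more samples shrinks the Monte Carlo
# error, giving the any-time behavior described above; the paper further
# turns the empirical weights into probabilistic upper/lower bounds.
m = 100_000
X = sample_q(m)
log_w = log_p_tilde(X) - log_q(X)
Z_hat = np.exp(log_w).mean()

# Brute-force Z for this small model, to sanity-check the estimate.
grid = ((np.arange(2 ** n)[:, None] >> np.arange(n)) & 1).astype(float)
Z = np.exp(log_p_tilde(grid)).sum()
print(f"exact log Z = {np.log(Z):.4f}   IS estimate = {np.log(Z_hat):.4f}")
```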

Cite

Text

Liu et al. "Probabilistic Variational Bounds for Graphical Models." Neural Information Processing Systems, 2015.

Markdown

[Liu et al. "Probabilistic Variational Bounds for Graphical Models." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/liu2015neurips-probabilistic/)

BibTeX

@inproceedings{liu2015neurips-probabilistic,
  title     = {{Probabilistic Variational Bounds for Graphical Models}},
  author    = {Liu, Qiang and Fisher, III, John W. and Ihler, Alex},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {1432--1440},
  url       = {https://mlanthology.org/neurips/2015/liu2015neurips-probabilistic/}
}