Improving Optimization-Based Approximate Inference by Clamping Variables

Abstract

While central to the application of probabilistic models to discrete data, the problem of marginal inference is in general intractable, so efficient approximation schemes must exploit the problem structure. Recently, there have been efforts to develop inference techniques that do not necessarily make factorization assumptions about the distribution, but rather exploit the fact that efficient algorithms sometimes exist for finding the MAP configuration. In this paper, we prove that for discrete multi-label models, the bounds on the partition function obtained by two of these approaches, Perturb-and-MAP and the bound from the infinite Rényi divergence, can only be improved by clamping any subset of the variables. For the case of log-supermodular models, we provide a more detailed analysis and develop a set of efficient strategies for choosing the order in which the variables should be clamped. Finally, we present a number of numerical experiments showcasing the improvements obtained by the proposed methods on several models.
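The clamping identity behind the abstract's claim can be illustrated on a toy model: fixing a variable to each of its values and summing the resulting sub-partition functions recovers the partition function exactly, so applying an upper bound to each sub-problem and summing can only tighten the overall bound. A minimal brute-force sketch (the model weights below are hypothetical, chosen only for illustration):

```python
import itertools
import math

# Tiny 3-variable binary pairwise model (hypothetical weights):
# p(x) proportional to exp(sum_i theta_i * x_i + sum_{ij} w_ij * x_i * x_j)
theta = [0.5, -0.3, 0.8]
w = {(0, 1): 1.2, (1, 2): -0.7}

def unnormalized(x):
    """Unnormalized probability exp(score) of a configuration x."""
    s = sum(theta[i] * x[i] for i in range(3))
    s += sum(w_ij * x[i] * x[j] for (i, j), w_ij in w.items())
    return math.exp(s)

# Exact partition function Z by enumerating all 2^3 configurations.
Z = sum(unnormalized(x) for x in itertools.product([0, 1], repeat=3))

# Clamp variable 0: for each of its values, sum over the remaining variables.
# The sub-partition functions must add up to Z exactly.
Z_clamped = sum(
    sum(unnormalized((v,) + rest) for rest in itertools.product([0, 1], repeat=2))
    for v in [0, 1]
)

print(abs(Z - Z_clamped) < 1e-9)  # True: clamping decomposes Z exactly
```

Because the decomposition is exact, any valid upper bound evaluated on each clamped sub-problem and then summed is still an upper bound on Z, and it is never looser than the bound on the unclamped problem when the bound is superadditive in this sense, which is what the paper establishes for Perturb-and-MAP and the infinite Rényi divergence bound.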

Cite

Text

Zhao et al. "Improving Optimization-Based Approximate Inference by Clamping Variables." Conference on Uncertainty in Artificial Intelligence, 2017.

Markdown

[Zhao et al. "Improving Optimization-Based Approximate Inference by Clamping Variables." Conference on Uncertainty in Artificial Intelligence, 2017.](https://mlanthology.org/uai/2017/zhao2017uai-improving/)

BibTeX

@inproceedings{zhao2017uai-improving,
  title     = {{Improving Optimization-Based Approximate Inference by Clamping Variables}},
  author    = {Zhao, Junyao and Djolonga, Josip and Tschiatschek, Sebastian and Krause, Andreas},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2017},
  url       = {https://mlanthology.org/uai/2017/zhao2017uai-improving/}
}