Budgeted Online Collective Inference

Abstract

Updating inference in response to new evidence is a fundamental challenge in artificial intelligence. Many real problems require large probabilistic graphical models, possibly containing millions of interdependent variables. For such large models, jointly updating the most likely (i.e., MAP) configuration of the variables each time new evidence is encountered can be infeasible, even if inference is tractable. In this paper, we explore budgeted online collective inference, in which the MAP configuration of a graphical model is updated efficiently by revising the assignments to a subset of the variables while holding the others fixed. The goal is to selectively update certain variables without sacrificing quality with respect to full inference. To formalize the consequences of partially updating inference, we introduce the concept of inference regret. We derive inference regret bounds for a class of graphical models with strongly-convex free energies. These theoretical insights, combined with a thorough analysis of the optimization solver, motivate several new approximate methods for efficiently updating the variable assignments under a budget constraint. In experiments, we demonstrate that our algorithms can reduce inference time by 65% while maintaining accuracy comparable to full inference.
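The core idea in the abstract, updating MAP inference over only a budgeted subset of variables and measuring the resulting inference regret, can be illustrated with a toy sketch. This is a hypothetical illustration, not the authors' method or code: it uses a tiny binary chain MRF, brute-force MAP as the "full inference" baseline, and a simple heuristic that frees only the variables nearest the new evidence.

```python
# Hypothetical sketch (not the paper's algorithm): budgeted online MAP
# updates on a tiny binary chain MRF, with "inference regret" taken as
# the score gap between the budgeted update and full re-inference.
import itertools


def score(assign, unary, pair_w):
    """Total score: unary rewards plus agreement rewards on chain edges."""
    s = sum(unary[i][assign[i]] for i in range(len(assign)))
    s += sum(pair_w for i in range(len(assign) - 1) if assign[i] == assign[i + 1])
    return s


def full_map(unary, pair_w):
    """Exact MAP by brute force (feasible only for tiny models)."""
    n = len(unary)
    return max(itertools.product([0, 1], repeat=n),
               key=lambda a: score(a, unary, pair_w))


def budgeted_update(prev_map, unary, pair_w, changed, budget):
    """Re-optimize only the `budget` variables nearest the changed one,
    holding all other variables fixed at their previous MAP values."""
    n = len(prev_map)
    free = sorted(range(n), key=lambda i: abs(i - changed))[:budget]
    best, best_s = None, float("-inf")
    for vals in itertools.product([0, 1], repeat=len(free)):
        a = list(prev_map)
        for i, v in zip(free, vals):
            a[i] = v
        s = score(a, unary, pair_w)
        if s > best_s:
            best, best_s = tuple(a), s
    return best


# Usage: new evidence arrives at variable 2; update under a budget of 3.
unary = [[1.0, 0.0] for _ in range(5)]  # each variable initially prefers label 0
pair_w = 0.5
m0 = full_map(unary, pair_w)            # initial MAP: all zeros
unary[2] = [0.0, 3.0]                   # new evidence: variable 2 prefers label 1
m_full = full_map(unary, pair_w)        # full re-inference
m_budget = budgeted_update(m0, unary, pair_w, changed=2, budget=3)
regret = score(m_full, unary, pair_w) - score(m_budget, unary, pair_w)
```

Here the budgeted update happens to recover the full-inference optimum (regret 0); with tighter budgets or stronger couplings the regret can be positive, which is the quantity the paper's bounds control for models with strongly-convex free energies.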

Cite

Text

Pujara et al. "Budgeted Online Collective Inference." Conference on Uncertainty in Artificial Intelligence, 2015.

Markdown

[Pujara et al. "Budgeted Online Collective Inference." Conference on Uncertainty in Artificial Intelligence, 2015.](https://mlanthology.org/uai/2015/pujara2015uai-budgeted/)

BibTeX

@inproceedings{pujara2015uai-budgeted,
  title     = {{Budgeted Online Collective Inference}},
  author    = {Pujara, Jay and London, Ben and Getoor, Lise},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2015},
  pages     = {712--721},
  url       = {https://mlanthology.org/uai/2015/pujara2015uai-budgeted/}
}