Reformulating Inference Problems Through Selective Conditioning

Abstract

We describe how we selectively reformulate portions of a belief network that pose difficulties for solution with a stochastic-simulation algorithm. We employ the selective conditioning approach to target specific nodes in a belief network for decomposition, based on the contribution the nodes make to the tractability of stochastic simulation. We review previous work on BNRAS algorithms, randomized approximation algorithms for probabilistic inference. We show how selective conditioning can be employed to reformulate a single BNRAS problem into multiple tractable BNRAS simulation problems. We discuss how we can use another simulation algorithm, logic sampling, to solve a component of the inference problem that provides a means for knitting the solutions of the individual subproblems into a final result. Finally, we analyze tradeoffs among the computational subtasks associated with the selective conditioning approach to reformulation.
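To make the logic-sampling component concrete, here is a minimal sketch of the algorithm on a toy two-node network. Logic sampling draws forward samples from the network and rejects those inconsistent with the evidence; the network A → B and its probabilities below are illustrative assumptions, not taken from the paper.

```python
import random

# Hypothetical two-node belief network A -> B (assumed for illustration).
P_A = 0.3                               # P(A = true)
P_B_GIVEN_A = {True: 0.9, False: 0.2}   # P(B = true | A)

def logic_sample(n_samples, evidence_b=True, seed=0):
    """Estimate P(A = true | B = evidence_b) by logic sampling
    (forward sampling with rejection of evidence-inconsistent samples)."""
    rng = random.Random(seed)
    accepted = a_true = 0
    for _ in range(n_samples):
        a = rng.random() < P_A              # sample A from its prior
        b = rng.random() < P_B_GIVEN_A[a]   # sample B given the sampled A
        if b != evidence_b:                 # reject samples that
            continue                        # contradict the evidence
        accepted += 1
        a_true += a
    return a_true / accepted if accepted else float("nan")

estimate = logic_sample(100_000)
```

For this toy network the exact posterior is P(A | B) = (0.3 · 0.9) / (0.3 · 0.9 + 0.7 · 0.2) ≈ 0.659, so the estimate should converge near that value as the sample count grows.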

Cite

Text

Dagum and Horvitz. "Reformulating Inference Problems Through Selective Conditioning." Conference on Uncertainty in Artificial Intelligence, 1992. doi:10.1016/B978-1-4832-8287-9.50011-6

Markdown

[Dagum and Horvitz. "Reformulating Inference Problems Through Selective Conditioning." Conference on Uncertainty in Artificial Intelligence, 1992.](https://mlanthology.org/uai/1992/dagum1992uai-reformulating/) doi:10.1016/B978-1-4832-8287-9.50011-6

BibTeX

@inproceedings{dagum1992uai-reformulating,
  title     = {{Reformulating Inference Problems Through Selective Conditioning}},
  author    = {Dagum, Paul and Horvitz, Eric},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {1992},
  pages     = {49-54},
  doi       = {10.1016/B978-1-4832-8287-9.50011-6},
  url       = {https://mlanthology.org/uai/1992/dagum1992uai-reformulating/}
}