Optimal Monte Carlo Estimation of Belief Network Inference

Abstract

We present two Monte Carlo sampling algorithms for probabilistic inference that guarantee polynomial-time convergence for a larger class of networks than current sampling algorithms provide. These new methods are variants of the known likelihood weighting algorithm. We use recent advances in the theory of optimal stopping rules for Monte Carlo simulation to obtain an inference approximation with relative error ε and a small failure probability δ. We present an empirical evaluation of the algorithms that demonstrates their improved performance.
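
The abstract describes combining likelihood weighting with stopping rules so that sampling halts once the estimate is within relative error ε with failure probability δ. As a rough illustration only, the sketch below runs likelihood weighting on a hypothetical two-node network and stops using a simple batch-means confidence-interval heuristic; the network parameters, function names, and stopping criterion are assumptions for this sketch and do not reproduce the paper's optimal stopping rules.

```python
import math
import random
import statistics

# Tiny illustrative network A -> B (not taken from the paper):
#   P(A=1) = 0.3,  P(B=1 | A=1) = 0.8,  P(B=1 | A=0) = 0.1
# Query: P(A=1 | B=1); the exact answer is 0.24 / 0.31 ≈ 0.7742.
P_A1 = 0.3
P_B1_GIVEN_A = {1: 0.8, 0: 0.1}


def lw_batch(rng, size):
    """One batch of likelihood-weighting samples for evidence B = 1.

    Each sample draws A from its prior and weights it by the likelihood
    of the evidence, P(B = 1 | A).  Returns the batch's weighted sums."""
    num = den = 0.0
    for _ in range(size):
        a = 1 if rng.random() < P_A1 else 0
        w = P_B1_GIVEN_A[a]
        num += w * a
        den += w
    return num, den


def estimate_posterior(epsilon=0.05, delta=0.05, batch=2000, seed=0):
    """Estimate P(A=1 | B=1) by likelihood weighting with a simple
    batch-means stopping heuristic: stop once a normal-approximation
    confidence interval suggests relative error below epsilon with
    failure probability roughly delta.  This heuristic stands in for
    the paper's optimal stopping rules."""
    rng = random.Random(seed)
    z = 1.96 if delta <= 0.05 else 1.64      # crude normal quantile
    batch_estimates = []
    total_num = total_den = 0.0
    while True:
        num, den = lw_batch(rng, batch)
        total_num += num
        total_den += den
        batch_estimates.append(num / den)
        if len(batch_estimates) < 2:
            continue
        p_hat = total_num / total_den
        se = statistics.stdev(batch_estimates) / math.sqrt(len(batch_estimates))
        if z * se <= epsilon * p_hat:        # relative-error criterion
            return p_hat, len(batch_estimates) * batch


if __name__ == "__main__":
    p, n = estimate_posterior()
    print(f"P(A=1 | B=1) ≈ {p:.4f} after {n} samples (exact ≈ 0.7742)")
```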

Cite

Text

Pradhan and Dagum. "Optimal Monte Carlo Estimation of Belief Network Inference." Conference on Uncertainty in Artificial Intelligence, 1996.

Markdown

[Pradhan and Dagum. "Optimal Monte Carlo Estimation of Belief Network Inference." Conference on Uncertainty in Artificial Intelligence, 1996.](https://mlanthology.org/uai/1996/pradhan1996uai-optimal/)

BibTeX

@inproceedings{pradhan1996uai-optimal,
  title     = {{Optimal Monte Carlo Estimation of Belief Network Inference}},
  author    = {Pradhan, Malcolm and Dagum, Paul},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {1996},
  pages     = {446--453},
  url       = {https://mlanthology.org/uai/1996/pradhan1996uai-optimal/}
}