Inference and Learning with Model Uncertainty in Probabilistic Logic Programs

Abstract

An issue that has so far received only limited attention in probabilistic logic programming (PLP) is the modelling of so-called epistemic uncertainty, the uncertainty about the model itself. Accurately quantifying this model uncertainty is paramount to robust inference, learning, and ultimately decision making. We introduce BetaProbLog, a PLP language that can model epistemic uncertainty. BetaProbLog has sound semantics, an effective inference algorithm that combines Monte Carlo techniques with knowledge compilation, and a parameter learning algorithm. We empirically outperform state-of-the-art methods on probabilistic inference tasks in second-order Bayesian networks, digit classification, and discriminative learning in the presence of epistemic uncertainty.
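To make the inference scheme sketched in the abstract concrete, here is a minimal, hypothetical illustration (not BetaProbLog syntax, and the program and parameters are invented for this sketch): two independent probabilistic facts `a` and `b` whose probabilities are themselves uncertain and modelled with Beta distributions, and a query `q :- a. q :- b.` Each Monte Carlo iteration draws concrete fact probabilities from the Betas and evaluates the query by weighted model counting, which for this tiny program reduces to the closed form `P(q) = pa + pb - pa*pb`; aggregating the samples yields both a point estimate and the epistemic spread around it.

```python
import random
import statistics

rng = random.Random(0)

def sample_query_prob():
    # Draw concrete fact probabilities from their Beta distributions
    # (the epistemic, second-order uncertainty about the model).
    pa = rng.betavariate(2.0, 5.0)
    pb = rng.betavariate(8.0, 2.0)
    # Weighted model count of the query q :- a. q :- b. with
    # independent facts: P(q) = 1 - (1 - pa) * (1 - pb).
    return pa + pb - pa * pb

# Monte Carlo over the model uncertainty.
samples = [sample_query_prob() for _ in range(10_000)]
mean = statistics.fmean(samples)  # point estimate of P(q)
std = statistics.stdev(samples)   # spread induced by epistemic uncertainty
print(f"P(q) ~ {mean:.3f} +/- {std:.3f}")
```

In BetaProbLog itself the per-sample evaluation is done once on a compiled arithmetic circuit rather than by hand-derived formulas, so the same circuit is re-evaluated cheaply for every parameter sample.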

Cite

Text

Verreet et al. "Inference and Learning with Model Uncertainty in Probabilistic Logic Programs." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I9.21245

Markdown

[Verreet et al. "Inference and Learning with Model Uncertainty in Probabilistic Logic Programs." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/verreet2022aaai-inference/) doi:10.1609/AAAI.V36I9.21245

BibTeX

@inproceedings{verreet2022aaai-inference,
  title     = {{Inference and Learning with Model Uncertainty in Probabilistic Logic Programs}},
  author    = {Verreet, Victor and Derkinderen, Vincent and Zuidberg Dos Martires, Pedro and De Raedt, Luc},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {10060--10069},
  doi       = {10.1609/AAAI.V36I9.21245},
  url       = {https://mlanthology.org/aaai/2022/verreet2022aaai-inference/}
}