Evidential Reasoning and Learning: A Survey

Abstract

When collaborating with an artificial intelligence (AI) system, we need to assess when to trust its recommendations. If we mistakenly trust it in regions where it is likely to err, catastrophic failures may occur, hence the need for Bayesian approaches to reasoning and learning that determine the confidence (or epistemic uncertainty) in the probabilities of the queried outcome. Pure Bayesian methods, however, suffer from high computational costs. To overcome them, we resort to efficient and effective approximations. In this paper, we focus on techniques that take the name of evidential reasoning and learning from the process of Bayesian update of given hypotheses based on additional evidence. This paper provides the reader with a gentle introduction to the area of investigation, up-to-date research outcomes, and the open questions still left unanswered.
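As a concrete illustration of the kind of approximation the abstract alludes to, the sketch below shows the Subjective-Logic-style binomial opinion: positive and negative observations are accumulated as evidence counts of a Beta distribution, and a prior weight yields an explicit epistemic-uncertainty mass that shrinks as evidence accumulates. This is a minimal sketch under standard Subjective Logic conventions, not code from the surveyed paper; the function names are illustrative.

```python
def binomial_opinion(r, s, W=2.0, a=0.5):
    """Map evidence counts to a binomial opinion (b, d, u, a).

    r: number of observations supporting the outcome
    s: number of observations against the outcome
    W: non-informative prior weight (W = 2 matches a uniform Beta prior)
    a: base rate, i.e. the prior probability of the outcome
    """
    total = r + s + W
    belief = r / total        # evidence for the outcome
    disbelief = s / total     # evidence against the outcome
    uncertainty = W / total   # epistemic uncertainty; shrinks with more evidence
    return belief, disbelief, uncertainty, a


def projected_probability(b, d, u, a):
    """Expected probability: belief plus the base-rate share of uncertainty."""
    return b + a * u
```

For example, with 8 supporting and 2 opposing observations, the uncertainty mass is 2/12 ≈ 0.17 and the projected probability is 0.75; with no observations at all, the opinion is pure uncertainty (u = 1) and the projected probability falls back to the base rate.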

Cite

Text

Cerutti et al. "Evidential Reasoning and Learning: A Survey." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/760

Markdown

[Cerutti et al. "Evidential Reasoning and Learning: A Survey." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/cerutti2022ijcai-evidential/) doi:10.24963/IJCAI.2022/760

BibTeX

@inproceedings{cerutti2022ijcai-evidential,
  title     = {{Evidential Reasoning and Learning: A Survey}},
  author    = {Cerutti, Federico and Kaplan, Lance M. and Sensoy, Murat},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {5418-5425},
  doi       = {10.24963/IJCAI.2022/760},
  url       = {https://mlanthology.org/ijcai/2022/cerutti2022ijcai-evidential/}
}