Explaining the Uncertainty in AI-Assisted Decision Making

Abstract

The aim of this project is to improve human decision-making through explainability; specifically, by explaining the (un)certainty of machine learning models. Prior research has used uncertainty measures to promote trust and better decision-making. However, explaining why the AI model is confident (or not confident) in its prediction still needs to be addressed. By explaining model uncertainty, we can promote trust, improve understanding, and support decision-making for users.
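As a point of reference only (not the method proposed in this paper), a common uncertainty measure for a classifier is the entropy of its predicted class probabilities; the minimal Python sketch below assumes a probability vector is already available from some model:

import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Entropy (in bits) of a classifier's predicted class probabilities.

    Higher values indicate a less confident (more uncertain) prediction.
    Assumes `probs` is a 1-D vector that sums to 1.
    """
    probs = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return float(-np.sum(probs * np.log2(probs)))

# Example: a confident vs. an uncertain prediction over three classes.
print(predictive_entropy(np.array([0.9, 0.05, 0.05])))  # ~0.57 bits (confident)
print(predictive_entropy(np.array([0.4, 0.3, 0.3])))    # ~1.57 bits (uncertain)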

Cite

Text

Thao Le. "Explaining the Uncertainty in AI-Assisted Decision Making." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26920

Markdown

[Thao Le. "Explaining the Uncertainty in AI-Assisted Decision Making." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/le2023aaai-explaining-a/) doi:10.1609/AAAI.V37I13.26920

BibTeX

@inproceedings{le2023aaai-explaining-a,
  title     = {{Explaining the Uncertainty in AI-Assisted Decision Making}},
  author    = {Le, Thao},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {16119--16120},
  doi       = {10.1609/AAAI.V37I13.26920},
  url       = {https://mlanthology.org/aaai/2023/le2023aaai-explaining-a/}
}