Explainable AI as Collaborative Task Solving
Abstract
We present a new framework for explainable AI (XAI) systems aimed at increasing human trust in a system's performance through explanations. Grounded in Theory of Mind, our framework X-ToM explicitly models the machine's mind, the human's mind as inferred by the machine, and the machine's mind as inferred by the human. These mental representations are used to (1) learn an optimal explanation policy that accounts for the human's perception and beliefs, and (2) quantitatively evaluate the human's trust in machine behaviors. We have applied X-ToM in the context of visual recognition. Compared with the most widely used attribution-based explanations (saliency maps), X-ToM significantly improves human trust in the underlying vision system.
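To make the three mental representations concrete, here is a minimal, hypothetical sketch in Python. The class and variable names are illustrative assumptions, not the paper's implementation; the trust proxy shown (agreement between the machine's actual beliefs and the human's model of them) is one simple way to operationalize the idea of quantitatively evaluating trust.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three X-ToM mental representations;
# names and structure are illustrative, not taken from the paper.

@dataclass
class MentalState:
    """Beliefs about image labels, stored as label -> confidence in [0, 1]."""
    beliefs: dict = field(default_factory=dict)

def belief_agreement(a: MentalState, b: MentalState) -> float:
    """Fraction of shared labels on which two minds agree (confidence > 0.5)."""
    shared = set(a.beliefs) & set(b.beliefs)
    if not shared:
        return 0.0
    agree = sum(1 for k in shared if (a.beliefs[k] > 0.5) == (b.beliefs[k] > 0.5))
    return agree / len(shared)

# The three minds the abstract describes:
machine = MentalState({"dog": 0.9, "cat": 0.2})             # machine's actual mind
human_inferred_by_machine = MentalState({"dog": 0.7, "cat": 0.6})
machine_inferred_by_human = MentalState({"dog": 0.8, "cat": 0.3})

# A crude trust proxy: how well the human's model of the machine
# matches the machine's actual beliefs.
trust = belief_agreement(machine, machine_inferred_by_human)
```

An explanation policy could then be scored by how much it raises this agreement after each explanation, which mirrors the framework's goal of selecting explanations that improve the human's model of the machine.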
Cite
Text
Akula et al. "Explainable AI as Collaborative Task Solving." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.

Markdown
[Akula et al. "Explainable AI as Collaborative Task Solving." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/akula2019cvprw-explainable/)

BibTeX
@inproceedings{akula2019cvprw-explainable,
title = {{Explainable AI as Collaborative Task Solving}},
author = {Akula, Arjun R. and Liu, Changsong and Todorovic, Sinisa and Chai, Joyce Y. and Zhu, Song-Chun},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019},
pages = {91-94},
url = {https://mlanthology.org/cvprw/2019/akula2019cvprw-explainable/}
}