Building More Explainable Artificial Intelligence with Argumentation

Abstract

Much of machine learning today is opaque, functioning like a "black box." However, for humans to understand, trust, and effectively manage emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.

Cite

Text

Zeng et al. "Building More Explainable Artificial Intelligence with Argumentation." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/aaai.v32i1.11353

Markdown

[Zeng et al. "Building More Explainable Artificial Intelligence with Argumentation." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/zeng2018aaai-building/) doi:10.1609/aaai.v32i1.11353

BibTeX

@inproceedings{zeng2018aaai-building,
  title     = {{Building More Explainable Artificial Intelligence with Argumentation}},
  author    = {Zeng, Zhiwei and Miao, Chunyan and Leung, Cyril and Chin, Jing Jih},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {8044--8046},
  doi       = {10.1609/aaai.v32i1.11353},
  url       = {https://mlanthology.org/aaai/2018/zeng2018aaai-building/}
}