Building Explainable Artificial Intelligence Systems

Abstract

As artificial intelligence (AI) systems and behavior models in military simulations grow increasingly complex, it becomes increasingly difficult for users to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but their designers have not heeded the lessons learned from earlier work on explaining expert system behavior. These new explanation systems are neither modular nor portable; each is tied to a particular AI system. In this paper, we present a modular, generic architecture for explaining the behavior of simulated entities. We describe its application to the Virtual Humans, a simulation designed to teach soft skills such as negotiation and cultural awareness.

Cite

Text

Core et al. "Building Explainable Artificial Intelligence Systems." AAAI Conference on Artificial Intelligence, 2006. doi:10.21236/ada459166

Markdown

[Core et al. "Building Explainable Artificial Intelligence Systems." AAAI Conference on Artificial Intelligence, 2006.](https://mlanthology.org/aaai/2006/core2006aaai-building/) doi:10.21236/ada459166

BibTeX

@inproceedings{core2006aaai-building,
  title     = {{Building Explainable Artificial Intelligence Systems}},
  author    = {Core, Mark G. and Lane, H. Chad and van Lent, Michael and Gomboc, Dave and Solomon, Steve and Rosenberg, Milton},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2006},
  pages     = {1766--1773},
  doi       = {10.21236/ada459166},
  url       = {https://mlanthology.org/aaai/2006/core2006aaai-building/}
}