Accountability Layers: Explaining Complex System Failures by Parts
Abstract
With the rise of AI in critical decision-making, many important predictions are made by complex and opaque AI algorithms. The aim of eXplainable Artificial Intelligence (XAI) is to make these opaque decision-making algorithms more transparent and trustworthy. This is often done by constructing an "explainable model" for a single modality or subsystem. However, this approach fails for complex systems composed of multiple parts. In this paper, I discuss how to explain complex system failures. I represent a complex machine as a hierarchical model of introspective sub-systems working together towards a common goal. The subsystems communicate in a common symbolic language. This work creates a set of explanatory accountability layers for trustworthy AI.
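As a rough illustration of the architecture the abstract describes, the sketch below models a machine as introspective subsystems that each emit claims in a shared symbolic language, with an accountability layer that localizes a failure to the inconsistent part. This is a hypothetical Python sketch, not the paper's implementation: the names (Claim, Subsystem, AccountabilityLayer) and the toy driving scenario are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Claim:
    """A symbolic statement emitted by a subsystem (the shared language)."""
    subsystem: str
    statement: str
    supported: bool
    evidence: list[str] = field(default_factory=list)

class Subsystem:
    """An introspective part of the machine: it both acts and explains itself."""
    def __init__(self, name: str):
        self.name = name

    def introspect(self, observation: dict) -> Claim:
        raise NotImplementedError

class Perception(Subsystem):
    def introspect(self, observation: dict) -> Claim:
        ok = observation.get("object_detected", False)
        return Claim(self.name, "an obstacle is ahead", ok,
                     [f"detector_confidence={observation.get('confidence', 0.0)}"])

class Planner(Subsystem):
    def introspect(self, observation: dict) -> Claim:
        ok = observation.get("plan") == "brake"
        return Claim(self.name, "the plan is to brake for the obstacle", ok,
                     [f"plan={observation.get('plan')}"])

class AccountabilityLayer:
    """Reconciles subsystem claims against the shared goal; flags the failing part."""
    def __init__(self, goal: str, parts: list[Subsystem]):
        self.goal = goal
        self.parts = parts

    def explain_failure(self, observation: dict) -> list[Claim]:
        claims = [p.introspect(observation) for p in self.parts]
        return [c for c in claims if not c.supported]  # the inconsistent parts

# Example: perception sees the obstacle, but the planner did not brake,
# so the failure is localized to the planning subsystem.
layer = AccountabilityLayer("avoid collisions",
                            [Perception("perception"), Planner("planner")])
blamed = layer.explain_failure({"object_detected": True,
                                "confidence": 0.93, "plan": "continue"})
for c in blamed:
    print(f"{c.subsystem}: '{c.statement}' not supported; evidence={c.evidence}")

Under these assumptions, the explanation of the whole-system failure is assembled "by parts": each layer accounts for its own behavior, and only the subsystem whose claim is inconsistent with the shared goal is blamed.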
Cite
Text
Gilpin. "Accountability Layers: Explaining Complex System Failures by Parts." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26806Markdown
[Gilpin. "Accountability Layers: Explaining Complex System Failures by Parts." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/gilpin2023aaai-accountability/) doi:10.1609/AAAI.V37I13.26806BibTeX
@inproceedings{gilpin2023aaai-accountability,
title = {{Accountability Layers: Explaining Complex System Failures by Parts}},
author = {Gilpin, Leilani H.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {15439},
doi = {10.1609/AAAI.V37I13.26806},
url = {https://mlanthology.org/aaai/2023/gilpin2023aaai-accountability/}
}