Actionable Model-Centric Explanations (Student Abstract)
Abstract
We recommend using a model-centric, Boolean Satisfiability (SAT) formalism to obtain useful explanations of trained model behavior, different from and complementary to what can be gleaned from LIME and SHAP, popular data-centric explanation tools in Artificial Intelligence (AI). We compare and contrast these methods, and show that data-centric methods may yield brittle explanations of limited practical utility. The model-centric framework, however, can offer actionable insights into the risks of using AI models in practice. For critical applications of AI, split-second decision making is best informed by robust explanations that are invariant to properties of the data, a capability offered by model-centric frameworks.
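To make the contrast concrete, here is a minimal, illustrative sketch of a model-centric query. It is not the authors' encoding: the model is a toy Boolean rule, and exhaustive enumeration stands in for a SAT solver at this tiny scale. The key point is that the query quantifies over *all* inputs (a model-level guarantee), whereas a data-centric tool like LIME or SHAP explains one sampled data point at a time.

```python
from itertools import product

# Toy "trained model": a Boolean decision rule over four features.
# Note that feature x[3] is never used by the rule.
def model(x):
    return (x[0] and x[1]) or x[2]

def feature_is_relevant(model, n_features, i):
    """SAT-style existential query: does there EXIST any input where
    flipping feature i changes the model's output? If no witness exists
    (the query is unsatisfiable), feature i is provably irrelevant for
    every possible input -- a guarantee no single-point explanation gives.
    Exhaustive search substitutes for a SAT solver on this toy scale."""
    for bits in product([False, True], repeat=n_features):
        flipped = list(bits)
        flipped[i] = not flipped[i]
        if model(bits) != model(flipped):
            return True   # witness found: feature i matters somewhere
    return False          # no witness: feature i never affects the output

relevant = [feature_is_relevant(model, 4, i) for i in range(4)]
print(relevant)  # features 0-2 are relevant; feature 3 provably is not
```

Because the answer holds for every input, it is invariant to the particular data points at hand, which is the sense in which model-centric explanations are "actionable" for risk assessment.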
Cite
Text
Morales et al. "Actionable Model-Centric Explanations (Student Abstract)." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I11.21646
Markdown
[Morales et al. "Actionable Model-Centric Explanations (Student Abstract)." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/morales2022aaai-actionable/) doi:10.1609/AAAI.V36I11.21646
BibTeX
@inproceedings{morales2022aaai-actionable,
title = {{Actionable Model-Centric Explanations (Student Abstract)}},
author = {Morales, Cecilia G. and Gisolfi, Nicholas and Edman, Robert and Miller, James Kyle and Dubrawski, Artur},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {13019-13020},
doi = {10.1609/AAAI.V36I11.21646},
url = {https://mlanthology.org/aaai/2022/morales2022aaai-actionable/}
}