Even if Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI
Abstract
Recently, eXplainable AI (XAI) research has focused on counterfactual explanations as post-hoc justifications for AI-system decisions (e.g., a customer refused a loan might be told “if you asked for a loan with a shorter term, it would have been approved”). Counterfactuals explain what changes to the input-features of an AI system change the output-decision. However, there is a sub-type of counterfactual, semi-factuals, that have received less attention in AI (though the Cognitive Sciences have studied them more). This paper surveys semi-factual explanation, summarising historical and recent work. It defines key desiderata for semi-factual XAI, reporting benchmark tests of historical algorithms (as well as a novel, naïve method) to provide a solid basis for future developments.
Cite
Text
Aryal and Keane. "Even if Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/732
Markdown
[Aryal and Keane. "Even if Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/aryal2023ijcai-even/) doi:10.24963/IJCAI.2023/732
BibTeX
@inproceedings{aryal2023ijcai-even,
title = {{Even if Explanations: Prior Work, Desiderata \& Benchmarks for Semi-Factual XAI}},
author = {Aryal, Saugat and Keane, Mark T.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2023},
pages = {6526--6535},
doi = {10.24963/IJCAI.2023/732},
url = {https://mlanthology.org/ijcai/2023/aryal2023ijcai-even/}
}