Efficient and Rigorous Model-Agnostic Explanations
Abstract
Explainable artificial intelligence (XAI) is at the core of trustworthy AI. The best-known XAI methods are sub-symbolic. Unfortunately, these methods offer no guarantees of rigor. Logic-based XAI addresses the lack of rigor of sub-symbolic methods, but in turn exhibits drawbacks of its own, including limited scalability, large explanation sizes, and the need to access the details of the machine learning (ML) model. Furthermore, access to the details of an ML model may reveal sensitive information. This paper builds on recent work on symbolic model-agnostic XAI, which is based on explaining samples of the behavior of a black-box ML model, and proposes efficient algorithms for the computation of such explanations. The experiments confirm the scalability of the novel algorithms.
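To make the sample-based, model-agnostic setting concrete, the sketch below shows a naive greedy search for a subset of features that is sufficient, over a set of sampled inputs, to keep a black-box classifier's prediction unchanged. This is an illustrative toy only, not the paper's algorithm; the names `predict_fn`, `sufficient_subset`, and the greedy deletion loop are assumptions made for this example.

```python
# Illustrative sketch (assumption, not the paper's method): greedily shrink the
# set of fixed features S so that every sampled point agreeing with instance v
# on S still receives v's prediction, as checked on the available samples.
import numpy as np

def sufficient_subset(predict_fn, v, samples):
    """Return a subset of feature indices sufficient (w.r.t. `samples`) to fix
    the black-box prediction of instance `v`."""
    target = predict_fn(v.reshape(1, -1))[0]
    S = list(range(v.shape[0]))            # start by fixing all features
    for i in list(S):
        trial = [j for j in S if j != i]   # tentatively drop feature i
        # sampled points that agree with v on the remaining fixed features
        mask = np.all(samples[:, trial] == v[trial], axis=1)
        agreeing = samples[mask]
        if len(agreeing) == 0 or np.all(predict_fn(agreeing) == target):
            S = trial                      # dropping i preserves sufficiency on the sample
    return S

# Toy usage with a hypothetical black-box: predicts 1 iff feature 0 is positive.
predict_fn = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(0)
samples = rng.integers(-1, 2, size=(200, 4))
v = np.array([1, -1, 0, 1])
print(sufficient_subset(predict_fn, v, samples))   # e.g. [0]
```

Note that only calls to `predict_fn` are used, so no access to the model internals is required; rigor relative to the sampled behavior, rather than this naive loop, is what the paper's algorithms target.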
Cite
Text
Marques-Silva et al. "Efficient and Rigorous Model-Agnostic Explanations." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/294

Markdown
[Marques-Silva et al. "Efficient and Rigorous Model-Agnostic Explanations." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/marquessilva2025ijcai-efficient/) doi:10.24963/IJCAI.2025/294

BibTeX
@inproceedings{marquessilva2025ijcai-efficient,
title = {{Efficient and Rigorous Model-Agnostic Explanations}},
author = {Marques-Silva, João and Lefebre-Lobaina, Jairo A. and Martinez, Maria Vanina},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
  pages = {2637--2646},
doi = {10.24963/IJCAI.2025/294},
url = {https://mlanthology.org/ijcai/2025/marquessilva2025ijcai-efficient/}
}