BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations
Abstract
Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research. In this paper, we develop a novel Bayesian extension to the LIME framework (one of the most widely used approaches in XAI), which we call BayLIME. Compared to LIME, BayLIME exploits prior knowledge and Bayesian reasoning to improve both the consistency of repeated explanations of a single prediction and the robustness to kernel settings. BayLIME also exhibits better explanation fidelity than state-of-the-art methods (LIME, SHAP and GradCAM) thanks to its ability to integrate prior knowledge from, e.g., a variety of other XAI techniques, as well as verification and validation (V&V) methods. We demonstrate the desirable properties of BayLIME through both theoretical analysis and extensive experiments.
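To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of a LIME-style local surrogate fitted with Bayesian linear regression instead of weighted least squares. It uses scikit-learn's BayesianRidge, whose zero-mean Gaussian prior roughly corresponds to a non-informative-prior variant; embedding informative priors from another XAI or V&V source would require replacing that prior. All function and parameter names here are illustrative assumptions.

import numpy as np
from sklearn.linear_model import BayesianRidge

def bayesian_local_surrogate(predict_fn, x, n_samples=1000,
                             kernel_width=0.75, random_state=None):
    # Fit a Bayesian linear surrogate around instance x (LIME-style, tabular case).
    # predict_fn must map an (n, d) array to a 1-D target, e.g. the black-box
    # model's probability for the class being explained.
    rng = np.random.default_rng(random_state)
    # Perturb the instance with Gaussian noise (illustrative sampling scheme).
    X_pert = x + rng.normal(size=(n_samples, x.shape[0]))
    y_pert = predict_fn(X_pert)
    # Exponential kernel weights on distance to x, as in LIME.
    dists = np.linalg.norm(X_pert - x, axis=1)
    weights = np.exp(-dists ** 2 / kernel_width ** 2)
    # Bayesian ridge regression replaces LIME's weighted least squares: the
    # Gaussian prior over the coefficients regularises the explanation, and the
    # posterior covariance (sigma_) quantifies its uncertainty.
    surrogate = BayesianRidge()
    surrogate.fit(X_pert - x, y_pert, sample_weight=weights)
    return surrogate.coef_, surrogate.sigma_

In this sketch, coef_ plays the role of LIME's feature-importance weights, while sigma_ gives the posterior uncertainty over them; combining such a prior with the perturbation data is the mechanism BayLIME uses to make repeated explanations more consistent and less sensitive to the kernel setting.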
Cite
Text
Zhao et al. "BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations." Uncertainty in Artificial Intelligence, 2021.
Markdown
[Zhao et al. "BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations." Uncertainty in Artificial Intelligence, 2021.](https://mlanthology.org/uai/2021/zhao2021uai-baylime/)
BibTeX
@inproceedings{zhao2021uai-baylime,
title = {{BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations}},
author = {Zhao, Xingyu and Huang, Wei and Huang, Xiaowei and Robu, Valentin and Flynn, David},
booktitle = {Uncertainty in Artificial Intelligence},
year = {2021},
pages = {887--896},
volume = {161},
url = {https://mlanthology.org/uai/2021/zhao2021uai-baylime/}
}