Generating Interpretable Counterfactual Explanations by Implicit Minimisation of Epistemic and Aleatoric Uncertainties
Abstract
Counterfactual explanations (CEs) are a practical tool for demonstrating why machine learning classifiers make particular decisions. For CEs to be useful, it is important that they are easy for users to interpret. Existing methods for generating interpretable CEs rely on auxiliary generative models, which may not be suitable for complex datasets and incur engineering overhead. We introduce a simple and fast method for generating interpretable CEs in a white-box setting without an auxiliary model, by using the predictive uncertainty of the classifier. Our experiments show that our proposed algorithm generates more interpretable CEs, according to IM1 scores (Van Looveren et al., 2019), than existing methods. Additionally, our approach allows us to estimate the uncertainty of a CE, which may be important in safety-critical applications, such as those in the medical domain.
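To make the core idea concrete, here is a minimal illustrative sketch, not the authors' reference implementation: a gradient-based counterfactual search over a deep ensemble. Driving down the ensemble's average loss for the target class implicitly reduces epistemic uncertainty (the members come to agree) and aleatoric uncertainty (each member becomes confident). All function and parameter names below (`generate_counterfactual`, `steps`, `lr`, `confidence`) are hypothetical, and the choice of an Adam-based update is one plausible instantiation of this kind of search.

```python
import torch
import torch.nn.functional as F

def generate_counterfactual(ensemble, x, target_class, steps=200, lr=0.05,
                            confidence=0.99):
    """Perturb `x` (a batch of one input) until the ensemble confidently
    predicts `target_class`. Hypothetical sketch, not the paper's algorithm."""
    x_cf = x.clone().detach().requires_grad_(True)
    target = torch.tensor([target_class])
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Ensemble prediction: average the members' class probabilities.
        probs = torch.stack([F.softmax(model(x_cf), dim=-1)
                             for model in ensemble]).mean(dim=0)
        # Minimising this loss pushes all members towards confident agreement
        # on the target class, i.e. low epistemic and aleatoric uncertainty.
        loss = F.nll_loss(torch.log(probs + 1e-12), target)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x_cf.clamp_(0.0, 1.0)  # keep the CE inside the valid input range
        if probs[0, target_class] >= confidence:
            break  # stop once the ensemble is sufficiently confident
    return x_cf.detach()
```

Under this view, the ensemble's predictive confidence at the returned `x_cf` doubles as an uncertainty estimate for the CE itself, which is the property the abstract highlights for safety-critical use.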
Cite
Text
Schut et al. "Generating Interpretable Counterfactual Explanations by Implicit Minimisation of Epistemic and Aleatoric Uncertainties." Artificial Intelligence and Statistics, 2021.Markdown
[Schut et al. "Generating Interpretable Counterfactual Explanations by Implicit Minimisation of Epistemic and Aleatoric Uncertainties." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/schut2021aistats-generating/)BibTeX
@inproceedings{schut2021aistats-generating,
title = {{Generating Interpretable Counterfactual Explanations by Implicit Minimisation of Epistemic and Aleatoric Uncertainties}},
author = {Schut, Lisa and Key, Oscar and Mc Grath, Rory and Costabello, Luca and Sacaleanu, Bogdan and Corcoran, Medb and Gal, Yarin},
booktitle = {Artificial Intelligence and Statistics},
year = {2021},
pages = {1756--1764},
volume = {130},
url = {https://mlanthology.org/aistats/2021/schut2021aistats-generating/}
}