Sanity Checks for Explanation Uncertainty
Abstract
Explanations for machine learning models can be hard to interpret or simply wrong. Combining an explanation method with an uncertainty estimation method produces explanation uncertainty. Evaluating explanation uncertainty is difficult. In this paper, we propose sanity checks for explanation uncertainty methods: weight and data randomization tests defined for explanations with uncertainty, allowing quick checks of combinations of uncertainty and explanation methods. We experimentally show the validity and effectiveness of these tests on the CIFAR10 and California Housing datasets, noting that Ensembles seem to consistently pass both tests with Guided Backpropagation, Integrated Gradients, and Local Interpretable Model-agnostic Explanations (LIME).
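As a rough illustration of the weight randomization test described above, the sketch below compares the mean and standard deviation of explanations before and after re-initializing the model's weights. This is not the authors' code: the small PyTorch model, the input-gradient explanation, and MC-Dropout as the uncertainty method are all illustrative assumptions. A combination would be expected to pass the check when both the explanation and its uncertainty change substantially once the trained weights are destroyed.

```python
# Minimal sketch of a weight-randomization sanity check for explanation
# uncertainty. Model, explanation method (input gradients) and uncertainty
# method (MC-Dropout) are illustrative assumptions, not the paper's setup.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def input_gradient(model, x):
    """Saliency-style explanation: gradient of the top logit w.r.t. the input."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, logits.argmax(dim=1).item()].backward()
    return x.grad.detach().squeeze(0)


def explanation_with_uncertainty(model, x, n_samples=20):
    """Mean and std of explanations over stochastic forward passes (MC-Dropout)."""
    model.train()  # keep dropout active so each explanation is a sample
    expls = torch.stack([input_gradient(model, x) for _ in range(n_samples)])
    return expls.mean(dim=0), expls.std(dim=0)


def randomize_weights(model):
    """Weight randomization test: re-initialize every linear layer of a copy."""
    rand_model = copy.deepcopy(model)
    for m in rand_model.modules():
        if isinstance(m, nn.Linear):
            nn.init.normal_(m.weight, std=0.1)
            nn.init.zeros_(m.bias)
    return rand_model


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Dropout(0.2),
                          nn.Linear(32, 2))
    x = torch.randn(1, 8)

    mean_trained, std_trained = explanation_with_uncertainty(model, x)
    mean_random, std_random = explanation_with_uncertainty(randomize_weights(model), x)

    # If explanations stay highly similar after randomization, the
    # explanation/uncertainty combination fails the sanity check.
    cos = F.cosine_similarity(mean_trained, mean_random, dim=0)
    print(f"mean-explanation similarity after randomization: {cos.item():.3f}")
    print(f"mean absolute change in uncertainty: {(std_random - std_trained).abs().mean().item():.3f}")
```

The data randomization test from the paper could be sketched analogously by retraining on permuted labels instead of randomizing weights, then comparing explanations and their uncertainty in the same way.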
Cite
Text
Valdenegro-Toro and Mulye. "Sanity Checks for Explanation Uncertainty." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91585-7_16
Markdown
[Valdenegro-Toro and Mulye. "Sanity Checks for Explanation Uncertainty." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/valdenegrotoro2024eccvw-sanity/) doi:10.1007/978-3-031-91585-7_16
BibTeX
@inproceedings{valdenegrotoro2024eccvw-sanity,
title = {{Sanity Checks for Explanation Uncertainty}},
author = {Valdenegro-Toro, Matias and Mulye, Mihir},
booktitle = {European Conference on Computer Vision Workshops},
year = {2024},
  pages = {255--270},
doi = {10.1007/978-3-031-91585-7_16},
url = {https://mlanthology.org/eccvw/2024/valdenegrotoro2024eccvw-sanity/}
}