IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients
Abstract
Integrated Gradients (IG) and its variants are well-known techniques for interpreting the decisions of deep neural networks. While IG-based approaches attain state-of-the-art performance, they often integrate noise into their explanation saliency maps, which reduces their interpretability. To minimize this noise, we examine its source analytically and propose a new approach to reduce the explanation noise based on our analytical findings. We propose the Important Direction Gradient Integration (IDGI) framework, which can be easily incorporated into any IG-based method that uses Riemann integration to compute the integrated gradients. Extensive experiments with three IG-based methods show that IDGI improves them drastically on numerous interpretability metrics.
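To make the idea concrete, below is a minimal sketch of an IDGI-style computation on top of a Riemann-sum Integrated Gradients loop, written in JAX. The names `model_fn` and `idgi_attribution`, the toy model, the straight-line path, and the step count are illustrative assumptions, not the paper's reference implementation: at each Riemann step, instead of adding g * (x_{j+1} - x_j) as standard IG does, the output change along the path segment is distributed over features in proportion to the squared gradient, i.e., along the gradient's ("important") direction.

```python
# Sketch only: a toy IDGI-style attribution, assuming a scalar class logit
# `model_fn` and a straight-line path from baseline to input.
import jax
import jax.numpy as jnp

def model_fn(x):
    # Toy stand-in for a network's output logit for the explained class.
    return jnp.sum(jnp.tanh(x) ** 2)

grad_fn = jax.grad(model_fn)

def idgi_attribution(x, baseline, steps=64):
    """Accumulate attributions along the path from baseline to x.

    Standard IG adds g * (x_{j+1} - x_j) at each Riemann step. The IDGI-style
    update instead distributes the observed output change
    f(x_{j+1}) - f(x_j) over features in proportion to g_i^2 / ||g||^2,
    keeping only the component along the important (gradient) direction.
    """
    alphas = jnp.linspace(0.0, 1.0, steps + 1)
    points = baseline + alphas[:, None] * (x - baseline)  # path points
    attribution = jnp.zeros_like(x)
    for j in range(steps):
        g = grad_fn(points[j])
        d = model_fn(points[j + 1]) - model_fn(points[j])  # output change
        attribution += (g * g) / (jnp.sum(g * g) + 1e-12) * d
    return attribution

x = jnp.array([0.5, -1.0, 2.0])
baseline = jnp.zeros_like(x)
attr = idgi_attribution(x, baseline)
# The per-step output changes telescope, so the attributions should sum
# (up to the epsilon term) to f(x) - f(baseline).
print(attr, jnp.sum(attr), model_fn(x) - model_fn(baseline))
```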
Cite
Text
Yang et al. "IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.02272
Markdown
[Yang et al. "IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/yang2023cvpr-idgi/) doi:10.1109/CVPR52729.2023.02272
BibTeX
@inproceedings{yang2023cvpr-idgi,
title = {{IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients}},
author = {Yang, Ruo and Wang, Binghui and Bilgic, Mustafa},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {23725-23734},
doi = {10.1109/CVPR52729.2023.02272},
url = {https://mlanthology.org/cvpr/2023/yang2023cvpr-idgi/}
}