Training for Stable Explanation for Free
Abstract
To foster trust in machine learning models, explanations must be faithful and stable so that they provide consistent insights. Existing works rely on the $\ell_p$ distance to assess stability, which diverges from human perception. Moreover, existing adversarial training (AT) approaches incur intensive computation and may lead to an arms race. To address these challenges, we introduce a novel metric to assess the stability of top-$k$ salient features. We introduce R2ET, which trains for stable explanations via an efficient and effective regularizer, and we analyze R2ET through multi-objective optimization to prove the numerical and statistical stability of its explanations. Moreover, theoretical connections between R2ET and certified robustness justify R2ET's stability under all attacks. Extensive experiments across various data modalities and model architectures show that R2ET achieves superior stability against stealthy attacks and generalizes effectively across different explanation methods. The code can be found at https://github.com/ccha005/R2ET.
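To illustrate the kind of top-$k$ stability measure the abstract refers to, the sketch below compares the top-$k$ salient features of an explanation before and after an input perturbation. It is a minimal, illustrative assumption rather than the paper's actual metric or the R2ET regularizer: the function names, the use of attribution magnitudes, and the overlap ratio are all hypothetical choices for demonstration.

```python
# Illustrative sketch (not the paper's implementation): measure how stable the
# top-k salient features of an explanation are under an input perturbation.
import numpy as np


def top_k_indices(attribution: np.ndarray, k: int) -> set:
    """Return indices of the k features with the largest |attribution|."""
    flat = np.abs(attribution).ravel()
    return set(np.argsort(flat)[-k:])


def top_k_overlap(attr_clean: np.ndarray, attr_perturbed: np.ndarray, k: int) -> float:
    """Fraction of the clean explanation's top-k features that remain top-k
    after the perturbation; 1.0 means the salient set is unchanged."""
    clean = top_k_indices(attr_clean, k)
    perturbed = top_k_indices(attr_perturbed, k)
    return len(clean & perturbed) / k


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attr = rng.normal(size=100)                     # saliency map for the original input
    attr_noisy = attr + 0.1 * rng.normal(size=100)  # saliency map after a small perturbation
    print(f"top-10 overlap: {top_k_overlap(attr, attr_noisy, k=10):.2f}")
```

A high overlap indicates that the explanation's most salient features are stable under the perturbation, which is the kind of ranking-based stability the paper targets instead of raw $\ell_p$ distances between attribution maps.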
Cite
Text
Chen et al. "Training for Stable Explanation for Free." Neural Information Processing Systems, 2024. doi:10.52202/079017-0113
Markdown
[Chen et al. "Training for Stable Explanation for Free." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/chen2024neurips-training/) doi:10.52202/079017-0113
BibTeX
@inproceedings{chen2024neurips-training,
title = {{Training for Stable Explanation for Free}},
author = {Chen, Chao and Guo, Chenghua and Chen, Rufeng and Ma, Guixiang and Zeng, Ming and Liao, Xiangwen and Zhang, Xi and Xie, Sihong},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-0113},
url = {https://mlanthology.org/neurips/2024/chen2024neurips-training/}
}