Robust Ranking Explanations

Abstract

Robust explanations of machine learning models are critical to establishing human trust in the models. Because of limited cognitive capacity, most humans can only interpret the top few salient features. It is therefore critical to make the top salient features robust to adversarial attacks, especially attacks targeting the more vulnerable gradient-based explanations. Existing defenses measure robustness using $\ell_p$-norms, which offer weaker protection. We define explanation thickness to measure the ranking stability of salient features, and derive tractable surrogate bounds of the thickness to design the R2ET algorithm, which efficiently maximizes the thickness and anchors the top salient features. Theoretically, we prove a connection between R2ET and adversarial training. Experiments with a wide spectrum of network architectures and data modalities, including brain networks, demonstrate that R2ET attains higher explanation robustness under stealthy attacks while retaining accuracy.
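
To make the abstract's idea concrete, below is a minimal, hedged sketch of the kind of ranking-margin regularizer it describes: a hinge-style surrogate that pushes each top-k gradient-saliency score above the best remaining score, added to the ordinary task loss. The function names (`saliency`, `ranking_margin_loss`, `training_step`) and hyperparameters (`lam`, `k`) are illustrative assumptions, not the paper's actual R2ET implementation or its exact surrogate bounds.

```python
import torch
import torch.nn.functional as F

def saliency(model, x, y):
    """Simple gradient-based saliency: |d logit_y / d x| per input feature."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, y.unsqueeze(1)).sum()
    # create_graph=True so the regularizer can backpropagate through the saliency
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs().flatten(1)                      # (batch, num_features)

def ranking_margin_loss(sal, k=8):
    """Illustrative 'thickness' proxy: penalize any non-top-k feature whose
    saliency overtakes a top-k feature, i.e. keep the top-k ranking stable."""
    topk_vals, topk_idx = sal.topk(k, dim=1)
    non_topk = torch.ones_like(sal, dtype=torch.bool).scatter_(1, topk_idx, False)
    best_rest = sal.masked_fill(~non_topk, float("-inf")).max(dim=1).values
    gaps = topk_vals - best_rest.unsqueeze(1)         # (batch, k) ranking gaps
    return F.relu(-gaps).mean()                       # hinge on ranking inversions

def training_step(model, x, y, optimizer, lam=0.1, k=8):
    """One training step: task loss plus the ranking-stability regularizer."""
    optimizer.zero_grad()
    task_loss = F.cross_entropy(model(x), y)
    reg = ranking_margin_loss(saliency(model, x, y), k=k)
    loss = task_loss + lam * reg                      # lam trades accuracy vs. stability
    loss.backward()
    optimizer.step()
    return loss.item()
```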

Cite

Text

Chen et al. "Robust Ranking Explanations." ICML 2023 Workshops: IMLH, 2023.

Markdown

[Chen et al. "Robust Ranking Explanations." ICML 2023 Workshops: IMLH, 2023.](https://mlanthology.org/icmlw/2023/chen2023icmlw-robust/)

BibTeX

@inproceedings{chen2023icmlw-robust,
  title     = {{Robust Ranking Explanations}},
  author    = {Chen, Chao and Guo, Chenghua and Ma, Guixiang and Zeng, Ming and Zhang, Xi and Xie, Sihong},
  booktitle = {ICML 2023 Workshops: IMLH},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/chen2023icmlw-robust/}
}