Generalizing Adversarial Explanations with Grad-CAM

Abstract

Gradient-weighted Class Activation Mapping (Grad-CAM) is an example-based explanation method that provides a gradient activation heatmap as an explanation for Convolutional Neural Network (CNN) models. The drawback of this method is that it cannot be used to generalize CNN behaviour. In this paper, we present a novel method that extends Grad-CAM from example-based explanations to a method for explaining global model behaviour. This is achieved by introducing two new metrics, (i) Mean Observed Dissimilarity (MOD) and (ii) Variation in Dissimilarity (VID), for model generalization. These metrics are computed by comparing a Normalized Inverted Structural Similarity Index (NISSIM) metric of the Grad-CAM generated heatmap for samples from the original test set and samples from the adversarial test set. For our experiments, we study adversarial attacks on deep models such as VGG16, ResNet50, and ResNet101, and wide models such as InceptionNetv3 and XceptionNet, using the Fast Gradient Sign Method (FGSM). We then compute the metrics MOD and VID for the automatic face recognition (AFR) use case with the VGGFace2 dataset. We observe a consistent shift in the regions highlighted in the Grad-CAM heatmaps, reflecting their contribution to the decision making, across all models under adversarial attack. The proposed method can be used to understand adversarial attacks and explain the behaviour of black-box CNN models for image analysis.
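The metric pipeline described in the abstract can be sketched in a few lines. Note the exact normalization of NISSIM and the precise definition of VID are not specified here, so this sketch assumes NISSIM maps SSIM from [-1, 1] to a dissimilarity in [0, 1], takes MOD as the mean NISSIM over sample pairs and VID as its standard deviation, and uses a simplified global SSIM (no sliding window) to stay dependency-free; the names `ssim_global`, `nissim`, and `mod_vid` are illustrative, not from the paper.

```python
import numpy as np

def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified global SSIM over whole heatmaps scaled to [0, 1]
    (no sliding window, unlike the standard windowed SSIM)."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )

def nissim(a, b):
    """Normalized Inverted SSIM: SSIM in [-1, 1] -> dissimilarity in [0, 1].
    (Assumed normalization; the paper's exact formula may differ.)"""
    return (1.0 - ssim_global(a, b)) / 2.0

def mod_vid(clean_heatmaps, adv_heatmaps):
    """MOD/VID over paired Grad-CAM heatmaps from clean and adversarial
    test sets: mean and spread of per-pair NISSIM dissimilarities."""
    scores = np.array([nissim(c, a) for c, a in zip(clean_heatmaps, adv_heatmaps)])
    return scores.mean(), scores.std()
```

A large MOD would then indicate that FGSM consistently shifts the regions a model attends to, while VID captures how stable that shift is across samples.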

Cite

Text

Chakraborty et al. "Generalizing Adversarial Explanations with Grad-CAM." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00031

Markdown

[Chakraborty et al. "Generalizing Adversarial Explanations with Grad-CAM." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/chakraborty2022cvprw-generalizing/) doi:10.1109/CVPRW56347.2022.00031

BibTeX

@inproceedings{chakraborty2022cvprw-generalizing,
  title     = {{Generalizing Adversarial Explanations with Grad-CAM}},
  author    = {Chakraborty, Tanmay and Trehan, Utkarsh and Mallat, Khawla and Dugelay, Jean-Luc},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2022},
  pages     = {186--192},
  doi       = {10.1109/CVPRW56347.2022.00031},
  url       = {https://mlanthology.org/cvprw/2022/chakraborty2022cvprw-generalizing/}
}