Identifying Adversarially Attackable and Robust Samples

Abstract

Adversarial attacks introduce small, imperceptible perturbations into input samples that cause large, undesired changes in the output of deep learning models. Despite extensive research on generating adversarial attacks and building defense systems, there has been limited research on understanding adversarial attacks from an input-data perspective. This work introduces the notion of sample attackability, aiming to identify the samples most susceptible to adversarial attacks (attackable samples) and, conversely, the least susceptible samples (robust samples). We propose a deep-learning-based detector to identify the adversarially attackable and robust samples in an unseen dataset for an unseen target model. Experiments on standard image classification datasets enable us to assess the portability of the deep attackability detector across a range of architectures. We find that the deep attackability detector outperforms simple model-uncertainty-based measures at identifying attackable/robust samples, suggesting that uncertainty is an inadequate proxy for a sample's distance to the decision boundary. Beyond advancing the understanding of adversarial attacks, the ability to identify adversarially attackable and robust samples has implications for improving the efficiency of sample-selection tasks.
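The abstract compares the proposed detector against simple model-uncertainty-based measures. The sketch below illustrates one such baseline, ranking samples by predictive (softmax) entropy; the function names and toy logits are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def predictive_entropy(logits):
    """Entropy of the softmax distribution for each row of logits."""
    z = logits - logits.max(axis=1, keepdims=True)  # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def rank_by_uncertainty(logits):
    """Sample indices sorted from most to least uncertain.

    Under an uncertainty-based attackability measure, the most uncertain
    samples would be flagged as the most attackable.
    """
    return np.argsort(-predictive_entropy(logits))

# Toy example: sample 0 is highly confident, sample 1 is maximally uncertain.
logits = np.array([[10.0, 0.0, 0.0],
                   [1.0, 1.0, 1.0]])
order = rank_by_uncertainty(logits)
```

Here `order[0]` is sample 1, the uniform-softmax sample. The paper's finding is that such rankings correlate poorly with true attackability, motivating the learned detector.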

Cite

Text

Raina and Gales. "Identifying Adversarially Attackable and Robust Samples." ICML 2023 Workshops: AdvML-Frontiers, 2023.

Markdown

[Raina and Gales. "Identifying Adversarially Attackable and Robust Samples." ICML 2023 Workshops: AdvML-Frontiers, 2023.](https://mlanthology.org/icmlw/2023/raina2023icmlw-identifying/)

BibTeX

@inproceedings{raina2023icmlw-identifying,
  title     = {{Identifying Adversarially Attackable and Robust Samples}},
  author    = {Raina, Vyas and Gales, Mark},
  booktitle = {ICML 2023 Workshops: AdvML-Frontiers},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/raina2023icmlw-identifying/}
}