Detecting AutoAttack Perturbations in the Frequency Domain

Abstract

Recently, adversarial attacks on image classification networks by the AutoAttack (Croce & Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches focus on network hardening and robustness enhancements, such as adversarial training. As a result, the currently best-reported method can withstand ∼66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial and frequency domain properties of AutoAttack and propose an alternative defense. Instead of hardening a network, we detect adversarial attacks during inference and reject manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two detection algorithms. First, a black-box detector that operates only on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, in both cases for ε = 8/255. Second, a white-box detector based on an analysis of CNN feature maps, reaching detection rates of 100% and 98.7% on the same benchmarks.
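The black-box idea in the abstract can be sketched roughly as follows: compute a frequency-domain representation of each input image (here, a log-scaled 2D FFT magnitude spectrum) and train a simple binary detector on those features. This is a minimal illustrative sketch, not the paper's exact pipeline; the feature layout and the nearest-class-mean classifier are our assumptions, and the paper's actual detector may differ.

```python
import numpy as np


def fourier_features(img: np.ndarray) -> np.ndarray:
    """Per-channel 2D FFT magnitude spectrum, log-scaled and flattened.

    `img` is assumed to be an HxWxC array in [0, 1]. The choice of
    log-magnitude features is illustrative, not the paper's exact recipe.
    """
    feats = []
    for c in range(img.shape[-1]):
        spec = np.fft.fftshift(np.fft.fft2(img[..., c]))
        feats.append(np.log1p(np.abs(spec)).ravel())
    return np.concatenate(feats)


class NearestMeanDetector:
    """Toy stand-in for a learned detector: flag an input as attacked
    if its spectrum is closer to the mean attacked spectrum than to
    the mean clean spectrum."""

    def fit(self, clean_feats: np.ndarray, attacked_feats: np.ndarray):
        self.mu_clean = clean_feats.mean(axis=0)
        self.mu_attacked = attacked_feats.mean(axis=0)
        return self

    def predict(self, feats: np.ndarray) -> np.ndarray:
        d_clean = np.linalg.norm(feats - self.mu_clean, axis=1)
        d_att = np.linalg.norm(feats - self.mu_attacked, axis=1)
        return (d_att < d_clean).astype(int)  # 1 = flagged as attacked


# Synthetic demo: smooth "clean" images vs. the same images with
# ±8/255 per-pixel sign noise standing in for an L-inf perturbation.
rng = np.random.default_rng(0)
clean_imgs = [np.full((32, 32, 1), v) for v in np.linspace(0.2, 0.8, 20)]
attacked_imgs = [
    np.clip(im + (8 / 255) * rng.choice([-1.0, 1.0], im.shape), 0, 1)
    for im in clean_imgs
]

clean_f = np.stack([fourier_features(im) for im in clean_imgs])
att_f = np.stack([fourier_features(im) for im in attacked_imgs])

det = NearestMeanDetector().fit(clean_f[:10], att_f[:10])
print(det.predict(clean_f[10:]))  # held-out clean inputs
print(det.predict(att_f[10:]))    # held-out attacked inputs
```

The intuition this captures is the one in the abstract: bounded pixel-space perturbations spread broadband energy across the spectrum, while natural image energy concentrates at low frequencies, so even a crude classifier on spectra can separate the two in this toy setting.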

Cite

Text

Lorenz et al. "Detecting AutoAttack Perturbations in the Frequency Domain." ICML 2021 Workshops: AML, 2021.

Markdown

[Lorenz et al. "Detecting AutoAttack Perturbations in the Frequency Domain." ICML 2021 Workshops: AML, 2021.](https://mlanthology.org/icmlw/2021/lorenz2021icmlw-detecting/)

BibTeX

@inproceedings{lorenz2021icmlw-detecting,
  title     = {{Detecting AutoAttack Perturbations in the Frequency Domain}},
  author    = {Lorenz, Peter and Harder, Paula and Straßel, Dominik and Keuper, Margret and Keuper, Janis},
  booktitle = {ICML 2021 Workshops: AML},
  year      = {2021},
  url       = {https://mlanthology.org/icmlw/2021/lorenz2021icmlw-detecting/}
}