Attacking Deep Networks with Surrogate-Based Adversarial Black-Box Methods Is Easy
Abstract
A recent line of work on black-box adversarial attacks has revived the use of transfer from surrogate models by integrating it into query-based search. However, we find that existing approaches of this type underperform their potential, and can be overly complicated besides. Here, we provide a short and simple algorithm which achieves state-of-the-art results through a search which uses the surrogate network's class-score gradients, with no need for other priors or heuristics. The guiding assumption of the algorithm is that the studied networks are in a fundamental sense learning similar functions, and that a transfer attack from one to the other should thus be fairly "easy". This assumption is validated by the extremely low query counts and failure rates achieved: e.g. an untargeted attack on a VGG-16 ImageNet network using a ResNet-152 as the surrogate yields a median query count of 6 at a success rate of 99.9%. Code is available at https://github.com/fiveai/GFCS.
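The core idea in the abstract — step the input along the surrogate network's class-score gradient and query the black-box victim after each step — can be illustrated with a toy sketch. Everything below is hypothetical: the linear "networks" `W` and `V`, the margin loss, and the function names are stand-ins for illustration, not the paper's GFCS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "networks": linear logit maps. W is the black-box victim; V is a
# correlated surrogate (hypothetical stand-ins for real deep networks,
# mirroring the paper's assumption that the two learn similar functions).
W = rng.normal(size=(3, 8))
V = W + 0.1 * rng.normal(size=(3, 8))  # similar, but not identical

def victim_logits(x):
    """Black box: may only be queried, never differentiated."""
    return W @ x

def surrogate_margin_grad(x, y):
    """Gradient (analytic, since V is linear) of the surrogate's margin:
    true-class logit minus the best competing logit."""
    logits = V @ x
    j = int(np.argmax(np.delete(logits, y)))
    j = j if j < y else j + 1  # re-index into the full logit vector
    return V[y] - V[j]

def surrogate_gradient_attack(x, y, step=0.05, max_queries=200):
    """Untargeted attack sketch: walk against the surrogate's margin
    gradient, querying the victim once per step. Returns the adversarial
    input and the query count, or None on failure."""
    x = x.copy()
    for q in range(1, max_queries + 1):
        if int(np.argmax(victim_logits(x))) != y:
            return x, q
        g = surrogate_margin_grad(x, y)
        x -= step * g / (np.linalg.norm(g) + 1e-12)
    return None

x0 = rng.normal(size=8)
y0 = int(np.argmax(victim_logits(x0)))  # the victim's original prediction
result = surrogate_gradient_attack(x0, y0)
```

Because the surrogate's gradient is well aligned with the victim's, the attack typically succeeds in a handful of queries on this toy problem, echoing (in spirit only) the low query counts the abstract reports.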
Cite
Text
Lord et al. "Attacking Deep Networks with Surrogate-Based Adversarial Black-Box Methods Is Easy." International Conference on Learning Representations, 2022.
Markdown
[Lord et al. "Attacking Deep Networks with Surrogate-Based Adversarial Black-Box Methods Is Easy." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/lord2022iclr-attacking/)
BibTeX
@inproceedings{lord2022iclr-attacking,
  title     = {{Attacking Deep Networks with Surrogate-Based Adversarial Black-Box Methods Is Easy}},
  author    = {Lord, Nicholas A. and Mueller, Romain and Bertinetto, Luca},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/lord2022iclr-attacking/}
}