Boosting Decision-Based Black-Box Adversarial Attacks with Random Sign Flip
Abstract
Decision-based black-box adversarial attacks (decision-based attacks) pose a severe threat to current deep neural networks, as they require only the predicted label of the target model to craft adversarial examples. However, existing decision-based attacks perform poorly in the $ l_\infty $ setting, and the enormous number of queries they require casts a shadow over their practicality. In this paper, we show that randomly flipping the signs of a small number of entries in adversarial perturbations can significantly boost attack performance. We name this simple and highly efficient decision-based $ l_\infty $ attack the Sign Flip Attack. Extensive experiments on CIFAR-10 and ImageNet show that the proposed method outperforms existing decision-based attacks by large margins and can serve as a strong baseline for evaluating the robustness of defensive models. We further demonstrate the applicability of the proposed method to real-world systems.
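The sketch below illustrates the core random sign-flip step the abstract describes, under the assumption that the $ l_\infty $-bounded perturbation keeps its entries at $\pm\epsilon$; the function name `random_sign_flip` and the parameter `flip_prob` are illustrative placeholders, not the paper's API.

```python
import numpy as np

def random_sign_flip(delta, flip_prob=0.01, rng=None):
    """Propose a candidate perturbation by flipping the signs of a small
    random subset of entries (a sketch of the paper's core idea).

    delta: current l_inf-bounded perturbation with entries in {-eps, +eps}.
    flip_prob: fraction of entries to flip (hypothetical parameter name).
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(delta.shape) < flip_prob  # pick a small random subset
    candidate = delta.copy()
    candidate[mask] = -candidate[mask]          # flip the signs of that subset
    return candidate
```

In a full decision-based attack loop, such a candidate would be queried against the target model and kept only if the perturbed input is still misclassified (the only feedback available is the predicted label); otherwise it is discarded and a new flip is sampled.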
Cite
Text
Chen et al. "Boosting Decision-Based Black-Box Adversarial Attacks with Random Sign Flip." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58555-6_17
Markdown
[Chen et al. "Boosting Decision-Based Black-Box Adversarial Attacks with Random Sign Flip." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/chen2020eccv-boosting/) doi:10.1007/978-3-030-58555-6_17
BibTeX
@inproceedings{chen2020eccv-boosting,
title = {{Boosting Decision-Based Black-Box Adversarial Attacks with Random Sign Flip}},
author = {Chen, Weilun and Zhang, Zhaoxiang and Hu, Xiaolin and Wu, Baoyuan},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58555-6_17},
url = {https://mlanthology.org/eccv/2020/chen2020eccv-boosting/}
}