Adversarial Eigen Attack on Black-Box Models
Abstract
Black-box adversarial attacks have attracted much research attention because of the difficulty posed by having almost no information about the attacked model and the additional constraint of a limited query budget. A common way to improve attack efficiency is to transfer the gradient information of a white-box substitute model trained on an extra dataset. In this paper, we deal with a more practical setting in which a pre-trained white-box model is provided with its network parameters but without extra training data. To solve the model-mismatch problem between the white-box and black-box models, we propose a novel algorithm, EigenBA, which systematically integrates gradient-based white-box attacks with the zeroth-order optimization used in black-box methods. We theoretically show that the optimal perturbation directions at each step are closely related to the right singular vectors of the Jacobian matrix of the pre-trained white-box model. Extensive experiments on ImageNet, CIFAR-10 and WebVision show that EigenBA consistently and significantly outperforms state-of-the-art baselines in terms of success rate and attack efficiency.
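The core idea described in the abstract, restricting the zeroth-order search to the input-space directions given by the top right singular vectors of the white-box Jacobian, can be sketched as follows. This is a minimal illustration under assumed interfaces, not the authors' implementation: `white_box` (the substitute network), `query_loss` (a black-box loss oracle), and all parameter choices are hypothetical placeholders.

```python
import torch

def top_right_singular_dirs(white_box, x, k=5):
    """Top-k right singular vectors of the Jacobian of the white-box
    model's output w.r.t. the input x. These span the input directions
    along which a small perturbation most changes the white-box output."""
    x = x.clone().requires_grad_(True)
    out = white_box(x).flatten()                      # shape: (m,)
    rows = []
    for i in range(out.shape[0]):                     # Jacobian row by row
        grad = torch.autograd.grad(out[i], x, retain_graph=True)[0]
        rows.append(grad.flatten())
    J = torch.stack(rows)                             # shape: (m, d)
    _, _, Vh = torch.linalg.svd(J, full_matrices=False)
    return Vh[:k]                                     # shape: (k, d)

def estimate_black_box_gradient(query_loss, x, dirs, eps=1e-3):
    """Zeroth-order estimate of the black-box loss gradient restricted to
    the subspace spanned by `dirs` (two finite-difference queries per
    direction), which is then usable for a gradient-style update step."""
    g = torch.zeros(x.numel())
    for v in dirs:
        v = v / v.norm()
        lp = query_loss((x.flatten() + eps * v).reshape(x.shape))
        lm = query_loss((x.flatten() - eps * v).reshape(x.shape))
        g += (lp - lm) / (2 * eps) * v
    return g.reshape(x.shape)
```

With this sketch, an attack iteration would recompute the singular directions at the current point, estimate the black-box gradient in that low-dimensional subspace, and take a small projected step; the query cost per iteration scales with `k` rather than with the input dimension.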
Cite
Text
Zhou et al. "Adversarial Eigen Attack on Black-Box Models." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01482
Markdown
[Zhou et al. "Adversarial Eigen Attack on Black-Box Models." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/zhou2022cvpr-adversarial/) doi:10.1109/CVPR52688.2022.01482
BibTeX
@inproceedings{zhou2022cvpr-adversarial,
title = {{Adversarial Eigen Attack on Black-Box Models}},
author = {Zhou, Linjun and Cui, Peng and Zhang, Xingxuan and Jiang, Yinan and Yang, Shiqiang},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
  pages = {15254--15262},
doi = {10.1109/CVPR52688.2022.01482},
url = {https://mlanthology.org/cvpr/2022/zhou2022cvpr-adversarial/}
}