Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability
Abstract
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers. Rather than focusing on crossing decision boundaries at the output layer of the source model, our method perturbs representations throughout the extracted feature hierarchy to resemble other classes. We design a flexible attack framework that allows for multi-layer perturbations and demonstrates state-of-the-art targeted transfer performance between ImageNet DNNs. We also show the superiority of our feature space methods under a relaxation of the common assumption that the source and target models are trained on the same dataset and label space, in some instances achieving a $10\times$ increase in targeted success rate relative to other blackbox transfer methods. Finally, we analyze why the proposed methods outperform existing attack strategies and show an extension of the method in the case when limited queries to the blackbox model are allowed.
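The abstract describes the core mechanism only at a high level: rather than crossing decision boundaries at the output layer, the adversary drives the intermediate activations of a whitebox source model toward those of the target class at several depths simultaneously. The sketch below is a minimal illustration of that idea under assumptions of our own, not the paper's implementation: it assumes a PyTorch source model, substitutes a plain L2 feature-matching loss against precomputed target-class feature summaries for the paper's actual objective, and uses an L-infinity-bounded iterative update. The layer names, loss, and hyperparameters are all placeholders.

```python
# Illustrative sketch of a multi-layer feature-space targeted attack on a
# whitebox "source" model. NOT the authors' method: the L2 feature-matching
# loss, layer selection, and hyperparameters here are assumptions.
import torch
import torch.nn.functional as F


def multilayer_feature_attack(model, x, target_feats, layer_names,
                              eps=16 / 255, alpha=2 / 255, steps=40):
    """Perturb x so its intermediate features resemble `target_feats`
    (precomputed, detached target-class feature summaries, e.g. mean
    activations of target-class images) at each layer in `layer_names`."""
    model.eval()
    feats = {}

    # Capture intermediate activations of the chosen layers via forward hooks.
    hooks = []
    for name, module in model.named_modules():
        if name in layer_names:
            hooks.append(module.register_forward_hook(
                lambda m, inp, out, name=name: feats.__setitem__(name, out)))

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        feats.clear()
        model(x_adv)
        # Aggregate feature-matching losses across layers (multi-layer attack).
        loss = sum(F.mse_loss(feats[n], target_feats[n]) for n in layer_names)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                 # descend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)            # project to L_inf ball
            x_adv = x_adv.clamp(0, 1)                           # valid image range
        x_adv = x_adv.detach()

    for h in hooks:
        h.remove()
    return x_adv
```

The adversarial image produced against the source model is then submitted to the blackbox target model; no gradients or queries from the target are used during crafting, matching the transfer-based threat model the abstract describes.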
Cite
Text
Inkawhich et al. "Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability." Neural Information Processing Systems, 2020.
Markdown
[Inkawhich et al. "Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/inkawhich2020neurips-perturbing/)
BibTeX
@inproceedings{inkawhich2020neurips-perturbing,
title = {{Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability}},
author = {Inkawhich, Nathan and Liang, Kevin and Wang, Binghui and Inkawhich, Matthew and Carin, Lawrence and Chen, Yiran},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/inkawhich2020neurips-perturbing/}
}