Towards Feature Space Adversarial Attack by Style Perturbation
Abstract
We propose a new adversarial attack on Deep Neural Networks for image classification. Unlike most existing attacks, which directly perturb input pixels, our attack perturbs abstract features, specifically features that denote styles, including interpretable styles such as vivid colors and sharp outlines as well as uninterpretable ones. It induces model misclassification by injecting imperceptible style changes through an optimization procedure. We show that our attack generates adversarial samples that look more natural than those of state-of-the-art unbounded attacks. Our experiments also indicate that existing pixel-space adversarial attack detection and defense techniques can hardly ensure robustness in the style-related feature space.
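To give a rough feel for attacking in a style-related feature space rather than pixel space, the minimal sketch below optimizes a scale/shift transform of a hidden feature map (an AdaIN-like style perturbation) to raise a toy classifier's loss. All weights, dimensions, and the finite-difference optimizer are hypothetical stand-ins, not the paper's actual method, which works on the encoder features of a style-transfer network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained classifier: 2-layer net, 3 classes.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))

def forward(x, gamma, beta):
    """Classify x after re-styling hidden features with scale gamma, shift beta."""
    h = np.maximum(W1 @ x, 0.0)                   # hidden "feature map"
    mu, sigma = h.mean(), h.std() + 1e-8
    h_styled = gamma * (h - mu) / sigma + beta    # AdaIN-like style perturbation
    return W2 @ h_styled                          # logits

def loss(x, gamma, beta, true_label):
    """Cross-entropy on the true label; the attack maximizes this."""
    z = forward(x, gamma, beta)
    z = z - z.max()                               # numerical stability
    return -(z[true_label] - np.log(np.exp(z).sum()))

x = rng.normal(size=4)
h = np.maximum(W1 @ x, 0.0)
g0, b0 = h.std() + 1e-8, h.mean()                 # identity style: features unchanged
true_label = int(np.argmax(forward(x, g0, b0)))

# Gradient *ascent* on the style parameters via central finite differences,
# starting from the identity so the change stays small early on.
g, b, eps, lr = g0, b0, 1e-5, 0.05
for _ in range(200):
    dg = (loss(x, g + eps, b, true_label) - loss(x, g - eps, b, true_label)) / (2 * eps)
    db = (loss(x, g, b + eps, true_label) - loss(x, g, b - eps, true_label)) / (2 * eps)
    g, b = g + lr * dg, b + lr * db

clean_loss = loss(x, g0, b0, true_label)
adv_loss = loss(x, g, b, true_label)
print(f"clean loss {clean_loss:.3f} -> adversarial loss {adv_loss:.3f}")
```

In the real attack the "imperceptibility" of the style change is enforced by the optimization objective itself; here the identity initialization merely keeps the perturbation small at the start.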
Cite
Text
Xu et al. "Towards Feature Space Adversarial Attack by Style Perturbation." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I12.17259
Markdown
[Xu et al. "Towards Feature Space Adversarial Attack by Style Perturbation." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/xu2021aaai-feature/) doi:10.1609/AAAI.V35I12.17259
BibTeX
@inproceedings{xu2021aaai-feature,
title = {{Towards Feature Space Adversarial Attack by Style Perturbation}},
author = {Xu, Qiuling and Tao, Guanhong and Cheng, Siyuan and Zhang, Xiangyu},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {10523-10531},
doi = {10.1609/AAAI.V35I12.17259},
url = {https://mlanthology.org/aaai/2021/xu2021aaai-feature/}
}