DPG: A Model to Build Feature Subspace Against Adversarial Patch Attack
Abstract
Adversarial patch attacks in the physical world are a major threat to the application of deep learning. However, current research on adversarial patch defense algorithms focuses on image pre-processing defenses, which have been shown to reduce the classification accuracy of clean images and to fail against physically realizable attacks. In this paper, we propose a defense patch GNN (DPG), which takes a new perspective on defending against adversarial patch attacks. First, we extract features from the input image with a feature extractor to obtain a feature set. Then we downsample the feature set with a global average pooling layer to reduce the perturbation that the adversarial patch introduces into the features. Finally, we propose a graph-structured feature subspace to make the feature representation more robust. In addition, we design an optimization algorithm based on stochastic gradient descent (SGD), which significantly increases the model's generalization ability. We demonstrate empirically the superior robustness of the DPG model against existing adversarial patch attacks, and DPG shows no accuracy loss in the prediction of clean images.
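The pipeline summarized above (feature extraction, global average pooling, then a graph built over the pooled features) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the tensor shapes, the cosine-similarity adjacency, and the function names are all assumptions made for the example.

```python
import numpy as np

def global_average_pool(feature_maps):
    """Downsample a (C, H, W) feature set to a (C,) vector by averaging
    each channel spatially, diluting localized patch perturbations."""
    return feature_maps.mean(axis=(1, 2))

def build_feature_graph(pooled, threshold=0.5):
    """Build an adjacency matrix over pooled feature vectors (N, C).
    Cosine similarity thresholding is an illustrative choice of edge rule,
    not the paper's construction."""
    norms = np.linalg.norm(pooled, axis=1, keepdims=True)
    unit = pooled / np.clip(norms, 1e-12, None)   # unit-normalize rows
    sim = unit @ unit.T                           # pairwise cosine similarity
    adj = (sim > threshold).astype(float)         # keep sufficiently similar pairs
    np.fill_diagonal(adj, 0.0)                    # no self-loops
    return adj
```

Here the graph structure ties each sample's pooled features to similar ones, so that downstream message passing can smooth out residual adversarial perturbations.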
Cite
Text
Xue et al. "DPG: A Model to Build Feature Subspace Against Adversarial Patch Attack." Machine Learning, 2024. doi:10.1007/S10994-023-06417-7
Markdown
[Xue et al. "DPG: A Model to Build Feature Subspace Against Adversarial Patch Attack." Machine Learning, 2024.](https://mlanthology.org/mlj/2024/xue2024mlj-dpg/) doi:10.1007/S10994-023-06417-7
BibTeX
@article{xue2024mlj-dpg,
title = {{DPG: A Model to Build Feature Subspace Against Adversarial Patch Attack}},
author = {Xue, Yunsheng and Wen, Mi and He, Wei and Li, Weiwei},
journal = {Machine Learning},
year = {2024},
pages = {5601-5622},
doi = {10.1007/S10994-023-06417-7},
volume = {113},
url = {https://mlanthology.org/mlj/2024/xue2024mlj-dpg/}
}