Boosting Adversarial Transferability with Spatial Adversarial Alignment

Abstract

Deep neural networks are vulnerable to adversarial examples that exhibit transferability across various models. Numerous approaches have been proposed to enhance the transferability of adversarial examples, including advanced optimization, data augmentation, and model modifications. However, these methods still show limited transferability, particularly in cross-architecture scenarios, such as from CNN to ViT. To achieve high transferability, we propose a technique termed Spatial Adversarial Alignment (SAA), which employs an alignment loss and leverages a witness model to fine-tune the surrogate model. Specifically, SAA consists of two key parts: spatial-aware alignment and adversarial-aware alignment. First, we minimize the divergences of features between the two models in both global and local regions, facilitating spatial alignment. Second, we introduce a self-adversarial strategy that leverages adversarial examples to impose further constraints, aligning features from an adversarial perspective. Through this alignment, the surrogate model is trained to concentrate on the common features extracted by the witness model. This facilitates adversarial attacks on these shared features, thereby yielding perturbations that exhibit enhanced transferability. Extensive experiments across various architectures on ImageNet show that surrogate models aligned with SAA produce more transferable adversarial examples, especially in cross-architecture attacks.
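The abstract describes a combined objective with a spatial-aware term (global and local feature divergences between surrogate and witness models) and an adversarial-aware term (divergence on features of adversarial examples). A minimal sketch of such an objective is below; all names and the choice of MSE as the divergence are illustrative assumptions, not the paper's exact formulation, and features are plain Python lists standing in for model feature maps.

```python
# Illustrative sketch (not the authors' implementation): a spatial-aware
# alignment term over global and local features, plus an adversarial-aware
# term over features of adversarial inputs. MSE is assumed as the divergence.

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def saa_alignment_loss(surr_global, wit_global,      # global features
                       surr_locals, wit_locals,      # lists of local-region features
                       surr_adv, wit_adv,            # features of adversarial examples
                       lam=1.0):                     # hypothetical weighting factor
    # Spatial-aware alignment: global divergence plus averaged local divergences.
    spatial = mse(surr_global, wit_global)
    if surr_locals:
        spatial += sum(mse(s, w) for s, w in zip(surr_locals, wit_locals)) / len(surr_locals)
    # Adversarial-aware alignment: divergence on adversarial-example features.
    adversarial = mse(surr_adv, wit_adv)
    return spatial + lam * adversarial
```

In the paper's setting this loss would be minimized while fine-tuning the surrogate, with the witness model's features held fixed, so the surrogate concentrates on features the two architectures share.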

Cite

Text

Chen et al. "Boosting Adversarial Transferability with Spatial Adversarial Alignment." Advances in Neural Information Processing Systems, 2025.

Markdown

[Chen et al. "Boosting Adversarial Transferability with Spatial Adversarial Alignment." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/chen2025neurips-boosting/)

BibTeX

@inproceedings{chen2025neurips-boosting,
  title     = {{Boosting Adversarial Transferability with Spatial Adversarial Alignment}},
  author    = {Chen, Zhaoyu and Guo, HaiJing and Jiang, Kaixun and Fu, Jiyuan and Zhou, Xinyu and Yang, Dingkang and Tang, Hao and Li, Bo and Zhang, Wenqiang},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/chen2025neurips-boosting/}
}