Enhancing Adversarial Transferability by Balancing Exploration and Exploitation with Gradient-Guided Sampling

Abstract

Adversarial attacks present a critical challenge to the robustness of deep neural networks, particularly in transfer scenarios across different model architectures. However, the transferability of adversarial attacks faces a fundamental dilemma between Exploitation (maximizing attack potency) and Exploration (enhancing cross-model generalization). Traditional momentum-based methods over-prioritize Exploitation, i.e., they reach higher loss maxima for attack potency but weaken generalization (narrow loss surface). Conversely, recent methods with inner-iteration sampling over-prioritize Exploration, i.e., they find flatter loss surfaces for cross-model generalization but weaken attack potency (suboptimal local maxima). To resolve this dilemma, we propose a simple yet effective Gradient-Guided Sampling (GGS), which harmonizes both objectives by guiding sampling along the gradient ascent direction, improving both sampling efficiency and stability. Specifically, building on MI-FGSM, GGS introduces inner-iteration random sampling and guides the sampling direction using the gradient from the previous inner iteration (the sampling magnitude is drawn from a random distribution). This mechanism encourages adversarial examples to reside in balanced regions that combine flatness, for cross-model generalization, with higher local maxima, for strong attack potency. Comprehensive experiments across multiple DNN architectures and multimodal large language models (MLLMs) demonstrate the superiority of our method over state-of-the-art transfer attacks. Code is available at https://github.com/anuin-cat/GGS.
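
The mechanism described in the abstract can be sketched in a few lines of PyTorch. The sketch below is illustrative only and not the authors' reference implementation: the function name ggs_attack, the number of inner samples n_inner, the uniform magnitude distribution, the scaling factor beta, and the averaging of inner-iteration gradients before the momentum update are all assumptions made for exposition; the linked repository contains the actual method.

import torch

def ggs_attack(model, loss_fn, x, y, eps=16/255, steps=10, mu=1.0,
               n_inner=5, beta=1.5):
    # Illustrative sketch: MI-FGSM outer loop with gradient-guided inner sampling.
    # x is a batch of images in [0, 1] with shape (B, C, H, W); y are the labels.
    alpha = eps / steps
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)

    for _ in range(steps):
        # Gradient at the current iterate seeds the first sampling direction.
        x_adv.requires_grad_(True)
        prev_grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach()

        avg_grad = torch.zeros_like(x)
        for _ in range(n_inner):
            # Sample along the previous inner-iteration gradient (ascent) direction;
            # the step magnitude is drawn from a random (here uniform) distribution.
            magnitude = beta * eps * torch.rand_like(x)
            x_sample = (x_adv + magnitude * prev_grad.sign()).requires_grad_(True)
            g = torch.autograd.grad(loss_fn(model(x_sample), y), x_sample)[0]
            avg_grad += g / n_inner
            prev_grad = g  # guide the next sample with this inner gradient

        # Standard MI-FGSM momentum update with the aggregated gradient.
        momentum = mu * momentum + avg_grad / avg_grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + alpha * momentum.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()

    return x_adv

Under these assumptions, a call such as ggs_attack(surrogate_model, torch.nn.functional.cross_entropy, images, labels) would craft L-infinity-bounded adversarial examples on a surrogate model, which are then transferred to unseen target models.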

Cite

Text

Niu et al. "Enhancing Adversarial Transferability by Balancing Exploration and Exploitation with Gradient-Guided Sampling." International Conference on Computer Vision, 2025.

Markdown

[Niu et al. "Enhancing Adversarial Transferability by Balancing Exploration and Exploitation with Gradient-Guided Sampling." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/niu2025iccv-enhancing/)

BibTeX

@inproceedings{niu2025iccv-enhancing,
  title     = {{Enhancing Adversarial Transferability by Balancing Exploration and Exploitation with Gradient-Guided Sampling}},
  author    = {Niu, Zenghao and Xie, Weicheng and Song, Siyang and Yu, Zitong and Liu, Feng and Shen, Linlin},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {3885--3894},
  url       = {https://mlanthology.org/iccv/2025/niu2025iccv-enhancing/}
}