Improving Adversarial Transferability via Model Alignment

Abstract

Neural networks are susceptible to adversarial perturbations that are transferable across different models. In this paper, we introduce a novel model alignment technique aimed at improving a given source model's ability to generate transferable adversarial perturbations. During the alignment process, the parameters of the source model are fine-tuned to minimize an alignment loss. This loss measures the divergence between the predictions of the source model and those of another, independently trained model, referred to as the witness model. To understand the effect of model alignment, we conduct a geometric analysis of the resulting changes in the loss landscape. Extensive experiments on the ImageNet dataset, using a variety of model architectures, demonstrate that perturbations generated from aligned source models exhibit significantly higher transferability than those from the original source model. Our source code is available at https://github.com/averyma/model-alignment.
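The abstract describes fine-tuning the source model to minimize a divergence between its predictions and those of a frozen witness model. The sketch below illustrates one way such an alignment step could look in PyTorch; the choice of a KL divergence between softmax outputs, the optimizer, and all hyperparameters are assumptions for illustration only, not the authors' exact implementation (see the linked repository for that).

```python
# Hypothetical sketch of the alignment step: fine-tune the source model so its
# predictions match those of an independently trained witness model.
import torch
import torch.nn.functional as F


def alignment_loss(source_logits, witness_logits, temperature=1.0):
    """Divergence between source and witness predictions (assumed KL form)."""
    log_p_source = F.log_softmax(source_logits / temperature, dim=1)
    p_witness = F.softmax(witness_logits / temperature, dim=1)
    return F.kl_div(log_p_source, p_witness, reduction="batchmean")


def align(source_model, witness_model, loader, epochs=1, lr=1e-4, device="cuda"):
    """Fine-tune the source model's parameters to minimize the alignment loss."""
    source_model.to(device).train()
    witness_model.to(device).eval()          # witness parameters stay frozen
    optimizer = torch.optim.SGD(source_model.parameters(), lr=lr, momentum=0.9)

    for _ in range(epochs):
        for images, _ in loader:             # labels are not needed for alignment
            images = images.to(device)
            with torch.no_grad():
                witness_logits = witness_model(images)
            loss = alignment_loss(source_model(images), witness_logits)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return source_model
```

After alignment, the returned source model would be used with any standard attack (e.g., PGD) to craft perturbations, which the paper reports transfer better to unseen target models.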

Cite

Text

Ma et al. "Improving Adversarial Transferability via Model Alignment." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73033-7_5

Markdown

[Ma et al. "Improving Adversarial Transferability via Model Alignment." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/ma2024eccv-improving/) doi:10.1007/978-3-031-73033-7_5

BibTeX

@inproceedings{ma2024eccv-improving,
  title     = {{Improving Adversarial Transferability via Model Alignment}},
  author    = {Ma, Avery and Farahmand, Amir-massoud and Pan, Yangchen and Torr, Philip and Gu, Jindong},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73033-7_5},
  url       = {https://mlanthology.org/eccv/2024/ma2024eccv-improving/}
}