How to Choose Your Best Allies for a Transferable Attack?

Abstract

The transferability of adversarial examples is a key issue in the security of deep neural networks. The possibility that an adversarial example crafted for a source model also fools another, targeted model makes the threat of adversarial attacks more realistic. Measuring transferability is a crucial problem, but the Attack Success Rate alone does not provide a sound evaluation. This paper proposes a new methodology for evaluating transferability by putting distortion in a central position. This new tool shows that transferable attacks may perform far worse than a black-box attack if the attacker randomly picks the source model. To address this issue, we propose a new selection mechanism, called FiT, which aims at choosing the best source model with only a few preliminary queries to the target. Our experimental results show that FiT is highly effective at selecting the best source model for multiple scenarios, such as single-model attacks, ensemble-model attacks, and multiple attacks.
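
The exact FiT score is defined in the full paper; purely as a loose illustration of the query-based selection idea it describes, the following minimal PyTorch sketch probes each candidate source model with a cheap white-box attack at a fixed distortion budget and keeps the one whose probes fool the black-box target most often. The helper names (target_query, fgsm, select_source_model) and the probe attack are hypothetical stand-ins, not the authors' method or API.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # One-step FGSM at a fixed L-infinity distortion eps:
    # a cheap white-box probe attack (stand-in for any attack).
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def select_source_model(candidates, target_query, probes, labels, eps=4/255):
    # Rank candidate source models with a few preliminary queries:
    # craft adversarial probes on each candidate source, send them to
    # the black-box target, and keep the source that fools it most
    # often at the same distortion budget.
    best_name, best_rate = None, -1.0
    for name, model in candidates:  # candidates: list of (name, model)
        x_adv = fgsm(model, probes, labels, eps)
        rate = (target_query(x_adv) != labels).float().mean().item()
        if rate > best_rate:
            best_name, best_rate = name, rate
    return best_name, best_rate

Note that each candidate costs len(probes) target queries, so the probe batch must stay very small to remain in the few-queries regime the paper targets.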

Cite

Text

Maho et al. "How to Choose Your Best Allies for a Transferable Attack?" International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00419

Markdown

[Maho et al. "How to Choose Your Best Allies for a Transferable Attack?" International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/maho2023iccv-choose/) doi:10.1109/ICCV51070.2023.00419

BibTeX

@inproceedings{maho2023iccv-choose,
  title     = {{How to Choose Your Best Allies for a Transferable Attack?}},
  author    = {Maho, Thibault and Moosavi-Dezfooli, Seyed-Mohsen and Furon, Teddy},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {4542--4551},
  doi       = {10.1109/ICCV51070.2023.00419},
  url       = {https://mlanthology.org/iccv/2023/maho2023iccv-choose/}
}