Augmented Lagrangian Adversarial Attacks

Abstract

Adversarial attack algorithms are dominated by penalty methods, which are slow in practice, or by more efficient distance-customized methods, which are heavily tailored to the properties of the considered distance. We propose a white-box attack algorithm to generate minimally perturbed adversarial examples based on Augmented Lagrangian principles. We introduce several algorithmic modifications that have a crucial effect on performance. Our attack enjoys the generality of penalty methods and the computational efficiency of distance-customized algorithms, and can be readily used for a wide set of distances. We compare our attack to state-of-the-art methods on three datasets and several models, and consistently obtain competitive performance with similar or lower computational complexity.
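As background for the abstract's contrast between penalty methods and Augmented Lagrangian principles, the following is a minimal, generic sketch of a classical augmented Lagrangian loop for an equality-constrained problem min f(x) s.t. g(x) = 0. It is not the paper's attack algorithm; the function names, step-size schedule, and penalty schedule are illustrative assumptions only. The key idea it shows: instead of driving a fixed penalty weight to infinity (as pure penalty methods do), a Lagrange multiplier estimate is updated alongside a moderately growing penalty, which is what makes such methods efficient in practice.

```python
import numpy as np

def augmented_lagrangian(f_grad, g, g_grad, x0,
                         outer_iters=15, inner_iters=100, lr=0.5):
    """Generic augmented Lagrangian method (illustrative sketch).

    Approximately minimizes
        L(x) = f(x) + lam * g(x) + (mu / 2) * g(x)**2
    in an inner gradient-descent loop, then applies the first-order
    multiplier update lam <- lam + mu * g(x) and grows mu.
    """
    x = np.asarray(x0, dtype=float)
    lam, mu = 0.0, 1.0
    for _ in range(outer_iters):
        step = lr / (1.0 + mu)  # shrink the step as the penalty stiffens
        for _ in range(inner_iters):
            gx = g(x)
            grad = f_grad(x) + (lam + mu * gx) * g_grad(x)
            x = x - step * grad
        lam = lam + mu * g(x)    # first-order multiplier update
        mu = min(mu * 2.0, 1e4)  # grow the penalty weight (capped)
    return x

# Toy problem: minimize (x - 2)^2 subject to x - 1 = 0; the solution is x = 1.
x_star = augmented_lagrangian(
    f_grad=lambda x: 2.0 * (x - 2.0),
    g=lambda x: x - 1.0,
    g_grad=lambda x: np.ones_like(x),
    x0=np.array([0.0]),
)
```

In the adversarial-example setting, f would measure perturbation size and the constraint would encode misclassification; the paper's contribution lies in the algorithmic modifications that make this scheme work well for attacks, which this generic sketch does not reproduce.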

Cite

Text

Rony et al. "Augmented Lagrangian Adversarial Attacks." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00764

Markdown

[Rony et al. "Augmented Lagrangian Adversarial Attacks." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/rony2021iccv-augmented/) doi:10.1109/ICCV48922.2021.00764

BibTeX

@inproceedings{rony2021iccv-augmented,
  title     = {{Augmented Lagrangian Adversarial Attacks}},
  author    = {Rony, Jérôme and Granger, Eric and Pedersoli, Marco and Ben Ayed, Ismail},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {7738--7747},
  doi       = {10.1109/ICCV48922.2021.00764},
  url       = {https://mlanthology.org/iccv/2021/rony2021iccv-augmented/}
}