ROPUST: Improving Robustness Through Fine-Tuning with Photonic Processors and Synthetic Gradients

Abstract

Robustness to adversarial attacks is typically obtained through expensive adversarial training with Projected Gradient Descent. We introduce ROPUST, a remarkably simple and efficient method to leverage robust pre-trained models and further increase their robustness, at no cost in natural accuracy. Our technique relies on the use of an Optical Processing Unit (OPU), a photonic co-processor, and a fine-tuning step performed with Direct Feedback Alignment, a synthetic gradient training scheme. We test our method on nine different models against four attacks in RobustBench, consistently improving over state-of-the-art performance. We also introduce phase retrieval attacks, specifically designed to target our own defense. We show that ROPUST remains effective even against state-of-the-art phase retrieval techniques.
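The synthetic gradient scheme named in the abstract, Direct Feedback Alignment (DFA), replaces the backpropagated error at each hidden layer with the global output error projected through a fixed random matrix; in ROPUST, such random projections are what the photonic OPU computes in hardware. The toy NumPy sketch below (an illustration of DFA in general, not the authors' implementation; all variable names are our own) trains a two-layer network this way:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: x -> h = tanh(W1 x) -> y = W2 h
d_in, d_h, d_out = 4, 8, 2
W1 = rng.normal(0.0, 0.5, (d_h, d_in))
W2 = rng.normal(0.0, 0.5, (d_out, d_h))

# Fixed random feedback matrix: stands in for W2.T in the backward pass.
# In ROPUST this kind of random projection is performed optically by the OPU.
B = rng.normal(0.0, 0.5, (d_h, d_out))

def dfa_step(x, target, lr=0.05):
    """One DFA update on a single sample; returns the squared-error loss."""
    global W1, W2
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - target                  # output error (gradient of 0.5*||e||^2 w.r.t. y)
    # DFA: hidden-layer signal is the output error through fixed random B,
    # gated by the local tanh derivative -- no transpose of W2 is ever used.
    dh = (B @ e) * (1.0 - h ** 2)
    W2 -= lr * np.outer(e, h)       # output layer: ordinary gradient step
    W1 -= lr * np.outer(dh, x)      # hidden layer: synthetic gradient step
    return 0.5 * float(e @ e)

x = rng.normal(size=d_in)
t = np.array([1.0, -1.0])
losses = [dfa_step(x, t) for _ in range(200)]
```

Despite the feedback weights being random and never trained, the forward weights adapt so that the true gradients come to align with the random feedback directions, which is why DFA can fine-tune a network without a full backward pass through every layer.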

Cite

Text

Cappelli et al. "ROPUST: Improving Robustness Through Fine-Tuning with Photonic Processors and Synthetic Gradients." ICML 2021 Workshops: AML, 2021.

Markdown

[Cappelli et al. "ROPUST: Improving Robustness Through Fine-Tuning with Photonic Processors and Synthetic Gradients." ICML 2021 Workshops: AML, 2021.](https://mlanthology.org/icmlw/2021/cappelli2021icmlw-ropust/)

BibTeX

@inproceedings{cappelli2021icmlw-ropust,
  title     = {{ROPUST: Improving Robustness Through Fine-Tuning with Photonic Processors and Synthetic Gradients}},
  author    = {Cappelli, Alessandro and Ohana, Ruben and Launay, Julien and Meunier, Laurent and Poli, Iacopo},
  booktitle = {ICML 2021 Workshops: AML},
  year      = {2021},
  url       = {https://mlanthology.org/icmlw/2021/cappelli2021icmlw-ropust/}
}