MACER: Attack-Free and Scalable Robust Training via Maximizing Certified Radius

Abstract

Adversarial training is one of the most popular ways to learn robust models, but it is usually attack-dependent and time-consuming. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work shows that randomized smoothing can be used to provide a certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including CIFAR-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve a larger average certified radius.
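The certified l2 radius the abstract refers to comes from the randomized-smoothing analysis (Cohen et al., 2019): if the smoothed classifier assigns the top class probability at least p_a and the runner-up at most p_b under Gaussian noise of scale sigma, the prediction is certifiably robust within radius (sigma/2)(Φ⁻¹(p_a) − Φ⁻¹(p_b)). MACER trains so that this quantity is large on average. A minimal sketch of the radius computation (the function name and example probabilities are illustrative, not from the paper):

```python
from statistics import NormalDist

def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
    """Certified l2 radius of a Gaussian-smoothed classifier:
    R = (sigma / 2) * (Phi^-1(p_a) - Phi^-1(p_b)),
    where p_a lower-bounds the top-class probability and p_b
    upper-bounds the runner-up probability under noise N(0, sigma^2 I).
    """
    phi_inv = NormalDist().inv_cdf  # standard normal quantile function
    return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))

# A more confident smoothed classifier certifies a larger radius:
r_confident = certified_radius(p_a=0.99, p_b=0.01, sigma=0.25)
r_marginal = certified_radius(p_a=0.60, p_b=0.40, sigma=0.25)
```

Since Φ⁻¹ is monotone, pushing p_a up and p_b down directly enlarges the radius, which is why maximizing it yields a differentiable, attack-free training objective.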

Cite

Text

Zhai et al. "MACER: Attack-Free and Scalable Robust Training via Maximizing Certified Radius." International Conference on Learning Representations, 2020.

Markdown

[Zhai et al. "MACER: Attack-Free and Scalable Robust Training via Maximizing Certified Radius." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/zhai2020iclr-macer/)

BibTeX

@inproceedings{zhai2020iclr-macer,
  title     = {{MACER: Attack-Free and Scalable Robust Training via Maximizing Certified Radius}},
  author    = {Zhai, Runtian and Dan, Chen and He, Di and Zhang, Huan and Gong, Boqing and Ravikumar, Pradeep and Hsieh, Cho-Jui and Wang, Liwei},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/zhai2020iclr-macer/}
}