Adversarial Training and Provable Robustness: A Tale of Two Objectives

Abstract

We propose a principled framework that combines adversarial training and provable robustness verification to train certifiably robust neural networks. We formulate training as a joint optimization over both an empirical robustness objective and a provable robustness objective, and develop a novel gradient-descent technique that eliminates bias in the stochastic multi-gradients. We provide both a theoretical convergence analysis of the proposed technique and an experimental comparison with state-of-the-art methods. Results on MNIST and CIFAR-10 show that our method consistently matches or outperforms prior approaches for provable ℓ∞ robustness. Notably, we achieve 6.60% verified test error on MNIST at ε = 0.3, and 66.57% on CIFAR-10 at ε = 8/255.
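The abstract does not spell out the training procedure, so the PyTorch sketch below only illustrates the general recipe it describes: one empirical loss (PGD adversarial training), one provable loss (here an interval-bound-propagation surrogate, which is an assumption; the paper's verified bound may differ), and a joint gradient step. The gradient combination shown is the generic closed-form two-objective min-norm (MGDA-style) weighting, not the paper's bias-eliminating multi-gradient estimator, and all names (MLP, pgd_loss, ibp_loss, joint_step) are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Small ReLU network; plain linear layers keep the IBP propagation short."""
    def __init__(self, d_in=784, d_hid=128, n_cls=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hid)
        self.fc2 = nn.Linear(d_hid, n_cls)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

def pgd_loss(model, x, y, eps, steps=10, lr=0.05):
    # Empirical objective: cross-entropy under an l_inf PGD adversary.
    # (Input-range clipping is omitted for brevity.)
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        g, = torch.autograd.grad(F.cross_entropy(model(x + delta), y), delta)
        delta = (delta + lr * g.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return F.cross_entropy(model(x + delta.detach()), y)

def ibp_loss(model, x, y, eps):
    # Provable objective: cross-entropy on interval-bound (IBP) worst-case logits.
    mu, r = x, torch.full_like(x, eps)              # center / radius of the input box
    mu, r = model.fc1(mu), F.linear(r, model.fc1.weight.abs())
    lo, hi = F.relu(mu - r), F.relu(mu + r)         # ReLU applied to interval endpoints
    mu, r = (lo + hi) / 2, (hi - lo) / 2
    mu, r = model.fc2(mu), F.linear(r, model.fc2.weight.abs())
    true_cls = F.one_hot(y, mu.size(1)).bool()
    worst = torch.where(true_cls, mu - r, mu + r)   # lower bound on true logit, upper on rest
    return F.cross_entropy(worst, y)

def joint_step(model, opt, x, y, eps):
    # One update on the min-norm convex combination of the two stochastic
    # gradients (the closed-form two-objective MGDA weighting).
    params = [p for p in model.parameters() if p.requires_grad]
    g1 = torch.autograd.grad(pgd_loss(model, x, y, eps), params)
    g2 = torch.autograd.grad(ibp_loss(model, x, y, eps), params)
    v1 = torch.cat([g.reshape(-1) for g in g1])
    v2 = torch.cat([g.reshape(-1) for g in g2])
    # alpha minimizing ||a*v1 + (1-a)*v2||^2 over a in [0, 1].
    alpha = ((v2 - v1).dot(v2) / (v1 - v2).pow(2).sum().clamp_min(1e-12)).clamp(0, 1)
    opt.zero_grad()
    for p, a, b in zip(params, g1, g2):
        p.grad = alpha * a + (1 - alpha) * b
    opt.step()

model = MLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
joint_step(model, opt, x, y, eps=0.3)               # one MNIST-scale training step

Unlike a fixed weighted sum of the two losses, the per-step convex combination adapts the trade-off between the empirical and provable objectives as training progresses, which is the role the paper's multi-gradient technique plays; the paper's contribution is removing the bias such stochastic weightings introduce.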

Cite

Text

Fan and Li. "Adversarial Training and Provable Robustness: A Tale of Two Objectives." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/aaai.v35i8.16904

Markdown

[Fan and Li. "Adversarial Training and Provable Robustness: A Tale of Two Objectives." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/fan2021aaai-adversarial/) doi:10.1609/aaai.v35i8.16904

BibTeX

@inproceedings{fan2021aaai-adversarial,
  title     = {{Adversarial Training and Provable Robustness: A Tale of Two Objectives}},
  author    = {Fan, Jiameng and Li, Wenchao},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {7367--7376},
  doi       = {10.1609/aaai.v35i8.16904},
  url       = {https://mlanthology.org/aaai/2021/fan2021aaai-adversarial/}
}