Globally-Robust Neural Networks
Abstract
The threat of adversarial examples has motivated work on training certifiably robust neural networks to facilitate efficient verification of local robustness at inference time. We formalize a notion of global robustness, which captures the operational properties of on-line local robustness certification while yielding a natural learning objective for robust training. We show that widely-used architectures can be easily adapted to this objective by incorporating efficient global Lipschitz bounds into the network, yielding certifiably-robust models by construction that achieve state-of-the-art verifiable accuracy. Notably, this approach requires significantly less time and memory than recent certifiable training methods, and leads to negligible costs when certifying points on-line; for example, our evaluation shows that it is possible to train a large robust Tiny-Imagenet model in a matter of hours. Our models effectively leverage inexpensive global Lipschitz bounds for real-time certification, despite prior suggestions that tighter local bounds are needed for good performance; we posit this is possible because our models are specifically trained to achieve tighter global bounds. Namely, we prove that the maximum achievable verifiable accuracy for a given dataset is not improved by using a local bound.
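The certification idea described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): upper-bound a network's global Lipschitz constant by the product of its layers' spectral norms, then certify a point as robust when its logit margin exceeds what any perturbation within the ball could close. All weights and the margin condition below are illustrative assumptions; the paper's actual construction uses tighter per-class-pair bounds built into the architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer ReLU network (random illustrative weights, not trained).
W1 = rng.normal(size=(16, 4)) * 0.3
W2 = rng.normal(size=(3, 16)) * 0.3

def forward(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Global l2 Lipschitz upper bound: product of layer spectral norms.
# ReLU is 1-Lipschitz, so it does not increase the bound.
K = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

def certify(x, eps):
    """Return (predicted class, certified?) for l2 radius eps.

    Each logit can change by at most K*eps within the ball, so a margin
    greater than 2*K*eps between the top logit and the runner-up is a
    sufficient (conservative) condition for local robustness at x.
    """
    y = forward(x)
    order = np.argsort(y)
    top, runner_up = order[-1], order[-2]
    margin = y[top] - y[runner_up]
    return int(top), bool(margin > 2 * K * eps)
```

Because `K` is computed once for the whole network, the per-point check is just a forward pass and a margin comparison, which is why on-line certification is nearly free with this approach.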
Cite
Text
Leino et al. "Globally-Robust Neural Networks." International Conference on Machine Learning, 2021.

Markdown

[Leino et al. "Globally-Robust Neural Networks." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/leino2021icml-globallyrobust/)

BibTeX
@inproceedings{leino2021icml-globallyrobust,
title = {{Globally-Robust Neural Networks}},
author = {Leino, Klas and Wang, Zifan and Fredrikson, Matt},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {6212--6222},
volume = {139},
url = {https://mlanthology.org/icml/2021/leino2021icml-globallyrobust/}
}