Invariance-Inducing Regularization Using Worst-Case Transformations Suffices to Boost Accuracy and Spatial Robustness
Abstract
This work provides theoretical and empirical evidence that invariance-inducing regularizers can increase predictive accuracy under worst-case spatial transformations (spatial robustness). Evaluated on such adversarially transformed examples, standard and adversarial training with these regularizers achieve a relative error reduction of 20% on CIFAR-10 with the same computational budget, even surpassing handcrafted spatial-equivariant networks. Furthermore, we observe for SVHN, known to have inherent variance in orientation, that robust training also improves standard accuracy on the test set. We prove that this no-trade-off phenomenon holds for adversarial examples from transformation groups.
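The core idea the abstract describes, regularizing against the worst-case member of a transformation group, can be illustrated with a minimal sketch. The code below is not the authors' implementation: it uses integer translations (one of the spatial transformations the paper considers) as the group, a toy stand-in for model logits, and a squared-difference invariance penalty as an assumed regularizer form. The worst-case transformation is found by exhaustive search over a small candidate grid.

```python
import numpy as np

def shift(image, dx, dy):
    # Translate an image by (dx, dy) pixels with wrap-around (illustrative only).
    return np.roll(np.roll(image, dx, axis=1), dy, axis=0)

def worst_case_shift(logits_fn, image, shifts):
    # Return the transformed image maximizing the change in model output,
    # together with that maximal gap (used as the invariance penalty).
    base = logits_fn(image)
    worst, worst_gap = image, -1.0
    for dx, dy in shifts:
        candidate = shift(image, dx, dy)
        gap = float(np.sum((logits_fn(candidate) - base) ** 2))
        if gap > worst_gap:
            worst, worst_gap = candidate, gap
    return worst, worst_gap

# Toy "model": mean pixel intensity per quadrant as stand-in logits
# (a real setup would use a trained network here).
def toy_logits(img):
    h, w = img.shape
    return np.array([img[:h // 2, :w // 2].mean(), img[:h // 2, w // 2:].mean(),
                     img[h // 2:, :w // 2].mean(), img[h // 2:, w // 2:].mean()])

rng = np.random.default_rng(0)
x = rng.random((8, 8))
candidates = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
x_adv, penalty = worst_case_shift(toy_logits, x, candidates)
# In training, `penalty` would be added to the classification loss to
# encourage invariance to the transformation group.
```

In the paper's setting the candidate set would cover rotations and translations, and the inner maximization can be done by grid search or first-order methods rather than this exhaustive loop.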
Cite
Text
Yang et al. "Invariance-Inducing Regularization Using Worst-Case Transformations Suffices to Boost Accuracy and Spatial Robustness." Neural Information Processing Systems, 2019.
Markdown
[Yang et al. "Invariance-Inducing Regularization Using Worst-Case Transformations Suffices to Boost Accuracy and Spatial Robustness." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/yang2019neurips-invarianceinducing/)
BibTeX
@inproceedings{yang2019neurips-invarianceinducing,
title = {{Invariance-Inducing Regularization Using Worst-Case Transformations Suffices to Boost Accuracy and Spatial Robustness}},
author = {Yang, Fanny and Wang, Zuowen and Heinze-Deml, Christina},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {14785--14796},
url = {https://mlanthology.org/neurips/2019/yang2019neurips-invarianceinducing/}
}