Adversarial Training Can Hurt Generalization
Abstract
While adversarial training can improve robust accuracy (against an adversary), it sometimes hurts standard accuracy (when there is no adversary). Previous work has studied this tradeoff between standard and robust accuracy, but only in the setting where no predictor performs well on both objectives in the infinite data limit. In this paper, we show that even when the optimal predictor with infinite data performs well on both objectives, a tradeoff can still manifest itself with finite data. Furthermore, since our construction is based on a convex learning problem, we rule out optimization concerns, thus laying bare a fundamental tension between robustness and generalization. Finally, we show that robust self-training mostly eliminates this tradeoff by leveraging unlabeled data.
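To make the two objectives concrete, below is a minimal illustrative sketch (not taken from the paper) of one adversarial-training step with an L-infinity PGD adversary, assuming a PyTorch classifier `model`, a batch `(x, y)`, and hypothetical hyperparameters `eps`, `alpha`, `num_steps`. Standard accuracy is measured on clean inputs `x`; robust accuracy is measured on the worst-case perturbed inputs `x + delta` found by the attack.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, num_steps=10):
    """Find a perturbation delta with ||delta||_inf <= eps that (approximately) maximizes the loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(num_steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascent step on the loss
            delta.clamp_(-eps, eps)             # project back into the L_inf ball
            delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y, eps=8/255):
    """Update the model on worst-case perturbed inputs instead of clean ones (robust objective)."""
    delta = pgd_attack(model, x, y, eps=eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training only on the robust loss above is what can degrade standard (clean) accuracy with finite data; the paper's robust self-training remedy additionally pseudo-labels unlabeled data and includes it in training.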
Cite
Text
Raghunathan et al. "Adversarial Training Can Hurt Generalization." ICML 2019 Workshops: Deep_Phenomena, 2019.

Markdown
[Raghunathan et al. "Adversarial Training Can Hurt Generalization." ICML 2019 Workshops: Deep_Phenomena, 2019.](https://mlanthology.org/icmlw/2019/raghunathan2019icmlw-adversarial/)

BibTeX
@inproceedings{raghunathan2019icmlw-adversarial,
title = {{Adversarial Training Can Hurt Generalization}},
author = {Raghunathan, Aditi and Xie, Sang Michael and Yang, Fanny and Duchi, John and Liang, Percy},
booktitle = {ICML 2019 Workshops: Deep_Phenomena},
year = {2019},
url = {https://mlanthology.org/icmlw/2019/raghunathan2019icmlw-adversarial/}
}