Adversarial Robustness Is at Odds with Lazy Training
Abstract
Recent works show that adversarial examples exist for random neural networks [Daniely and Schacham, 2020] and that these examples can be found using a single step of gradient ascent [Bubeck et al., 2021]. In this work, we extend this line of work to "lazy training" of neural networks -- a dominant model in deep learning theory in which neural networks are provably efficiently learnable. We show that over-parametrized neural networks that are guaranteed to generalize well and enjoy strong computational guarantees remain vulnerable to attacks generated using a single step of gradient ascent.
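To make the attack referenced in the abstract concrete, here is a minimal illustrative sketch of a single step of gradient ascent applied to a randomly initialized two-layer ReLU network. This is an assumption-laden toy example, not the paper's exact construction: the network scaling, the perturbation budget, and the choice to use the network's own sign as the label are all illustrative.

```python
# Illustrative sketch only: a single gradient-ascent step against a random
# two-layer ReLU network. Widths, scalings, and the budget `eps` are assumptions.
import torch

torch.manual_seed(0)
d, m = 100, 10_000                      # input dimension, hidden width

# Randomly initialized two-layer network f(x) = (1/sqrt(m)) * a^T relu(W x)
W = torch.randn(m, d) / d ** 0.5
a = torch.sign(torch.randn(m))

def f(x):
    return (a * torch.relu(W @ x)).sum() / m ** 0.5

x = torch.randn(d, requires_grad=True)  # a "clean" input
y = torch.sign(f(x)).detach()           # treat the network's own sign as the label

# One step of gradient ascent on the negative margin -y * f(x):
loss = -y * f(x)
loss.backward()
eps = 0.5                               # perturbation budget (illustrative)
x_adv = x + eps * x.grad / x.grad.norm()

print(f"clean margin:                  {float(y * f(x)):+.4f}")
print(f"margin after one ascent step:  {float(y * f(x_adv)):+.4f}")
```

In this toy setting, the single normalized gradient step typically drives the margin toward (or past) zero, mirroring the qualitative phenomenon the abstract describes for lazily trained, over-parametrized networks.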
Cite
Text
Wang et al. "Adversarial Robustness Is at Odds with Lazy Training." Neural Information Processing Systems, 2022.Markdown
[Wang et al. "Adversarial Robustness Is at Odds with Lazy Training." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/wang2022neurips-adversarial/)BibTeX
@inproceedings{wang2022neurips-adversarial,
title = {{Adversarial Robustness Is at Odds with Lazy Training}},
author = {Wang, Yunjuan and Ullah, Enayat and Mianjy, Poorya and Arora, Raman},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/wang2022neurips-adversarial/}
}