Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness

Abstract

We present an oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners. Barely robust learning algorithms learn predictors that are adversarially robust only on a small fraction $\beta \ll 1$ of the data distribution. Our proposed notion of barely robust learning requires robustness with respect to a "larger" perturbation set, which we show is necessary for strongly robust learning; weaker relaxations do not suffice. Our results reveal a qualitative and quantitative equivalence between two seemingly unrelated problems: strongly robust learning and barely robust learning.
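
The central quantity in the abstract is the fraction $\beta$ of the distribution on which a predictor is adversarially robust. The following minimal Python sketch, which is not the paper's algorithm, estimates that fraction empirically for a hypothetical 1-D threshold predictor under a hypothetical additive perturbation set; it only illustrates why a predictor can be accurate everywhere yet robust on just a small fraction of points.

import numpy as np

def robust_fraction(predict, X, y, perturbations):
    # Fraction of (x, y) pairs on which `predict` is robust: it must
    # output y on x itself and on every perturbation x + d of x.
    robust = 0
    for x, label in zip(X, y):
        candidates = [x] + [x + d for d in perturbations]
        if all(predict(z) == label for z in candidates):
            robust += 1
    return robust / len(X)

# Hypothetical example: uniform data on [-1, 1], labels given by the sign
# of x, and a threshold predictor that is correct on every unperturbed point.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=1000)
y = (X > 0).astype(int)

def predict(z):
    return int(z > 0)

# With a perturbation radius of 0.9, only points with |x| > 0.9 keep their
# label under every perturbation, so the predictor is robust on roughly a
# beta = 0.1 fraction of the sample: "barely robust" in the abstract's sense.
delta = 0.9
beta = robust_fraction(predict, X, y, perturbations=[-delta, +delta])
print(f"robust on a beta = {beta:.2f} fraction of the sample")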

Cite

Text

Blum et al. "Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness." Neural Information Processing Systems, 2022.

Markdown

[Blum et al. "Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/blum2022neurips-boosting/)

BibTeX

@inproceedings{blum2022neurips-boosting,
  title     = {{Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness}},
  author    = {Blum, Avrim and Montasser, Omar and Shakhnarovich, Greg and Zhang, Hongyang},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/blum2022neurips-boosting/}
}