Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization
Abstract
We present a minimax optimal learner for the problem of learning predictors robust to adversarial examples at test time. Interestingly, we find that this requires new algorithmic ideas and approaches to adversarially robust learning. In particular, we show, in a strong negative sense, the suboptimality of the robust learner proposed by Montasser, Hanneke, and Srebro [2019] and a broader family of learners we identify as local learners. Our results are enabled by adopting a global perspective, specifically through a key technical contribution: the global one-inclusion graph, which generalizes the classical one-inclusion graph due to Haussler, Littlestone, and Warmuth [1994] and may be of independent interest. Finally, as a byproduct, we identify a dimension characterizing qualitatively and quantitatively what classes of predictors $\mathcal{H}$ are robustly learnable. This resolves an open problem due to Montasser et al. [2019], and closes a (potentially) infinite gap between the established upper and lower bounds on the sample complexity of adversarially robust learning.
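For readers unfamiliar with the classical object the paper generalizes, here is a minimal sketch of the one-inclusion graph of Haussler, Littlestone, and Warmuth [1994] for a finite projection of a binary hypothesis class. This illustrates only the classical construction, not the paper's global variant; the function name and the threshold-class example are ours, chosen for illustration.

```python
from itertools import combinations

def one_inclusion_graph(behaviors):
    """Classical one-inclusion graph: vertices are the distinct 0/1
    behavior vectors of a hypothesis class projected onto n sample
    points; an edge joins two behaviors that disagree on exactly one
    point. `behaviors` is an iterable of equal-length 0/1 tuples."""
    vertices = set(behaviors)
    edges = set()
    for u, v in combinations(vertices, 2):
        # Hamming distance 1 means the two behaviors differ on one point.
        if sum(a != b for a, b in zip(u, v)) == 1:
            edges.add(frozenset((u, v)))
    return vertices, edges

# Example: projections of threshold classifiers onto 3 points
# form a path in the one-inclusion graph.
verts, edges = one_inclusion_graph([(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)])
print(len(verts), len(edges))  # 4 vertices, 3 edges
```

The classical prediction strategy orients the edges of this graph to minimize maximum out-degree and predicts accordingly; the paper's global one-inclusion graph adapts this idea to the adversarially robust setting.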
Cite
Text
Montasser et al. "Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization." Neural Information Processing Systems, 2022.
Markdown
[Montasser et al. "Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/montasser2022neurips-adversarially/)
BibTeX
@inproceedings{montasser2022neurips-adversarially,
title = {{Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization}},
author = {Montasser, Omar and Hanneke, Steve and Srebro, Nati},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/montasser2022neurips-adversarially/}
}