VC Classes Are Adversarially Robustly Learnable, but Only Improperly
Abstract
We study the question of learning an adversarially robust predictor. We show that any hypothesis class $\mathcal{H}$ with finite VC dimension is robustly PAC learnable with an \emph{improper} learning rule. The requirement of being improper is necessary as we exhibit examples of hypothesis classes $\mathcal{H}$ with finite VC dimension that are \emph{not} robustly PAC learnable with any \emph{proper} learning rule.
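For readers unfamiliar with the setting, the following is a minimal sketch of the robust risk that this line of work targets; the perturbation set $\mathcal{U}(x)$ (for example, a small $\ell_\infty$ ball around $x$) is an assumption here, since the abstract does not fix a particular choice:
$$\mathrm{R}_{\mathcal{U}}(h;\mathcal{D}) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\sup_{z \in \mathcal{U}(x)} \mathbb{1}\big[h(z)\neq y\big]\Big].$$
Under this objective, a robust PAC learner must output a predictor whose robust risk approaches $\inf_{h\in\mathcal{H}} \mathrm{R}_{\mathcal{U}}(h;\mathcal{D})$; the learner is \emph{proper} if it always returns some $h \in \mathcal{H}$, and \emph{improper} if it may return a predictor outside $\mathcal{H}$.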
Cite
Text
Montasser et al. "VC Classes Are Adversarially Robustly Learnable, but Only Improperly." Conference on Learning Theory, 2019.
Markdown
[Montasser et al. "VC Classes Are Adversarially Robustly Learnable, but Only Improperly." Conference on Learning Theory, 2019.](https://mlanthology.org/colt/2019/montasser2019colt-vc/)
BibTeX
@inproceedings{montasser2019colt-vc,
title = {{VC Classes Are Adversarially Robustly Learnable, but Only Improperly}},
author = {Montasser, Omar and Hanneke, Steve and Srebro, Nathan},
booktitle = {Conference on Learning Theory},
year = {2019},
pages = {2512--2530},
volume = {99},
url = {https://mlanthology.org/colt/2019/montasser2019colt-vc/}
}