Reducing Adversarially Robust Learning to Non-Robust PAC Learning
Abstract
We study the problem of reducing adversarially robust learning to standard PAC learning, i.e., the complexity of learning adversarially robust predictors using only black-box access to a non-robust learner. We give a reduction that can robustly learn any hypothesis class C using any non-robust learner A for C. The number of calls to A depends logarithmically on the number of allowed adversarial perturbations per example, and we give a lower bound showing this is unavoidable.
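The paper's actual reduction is more involved (it achieves the logarithmic number of calls to A stated above). As a toy illustration of the black-box interface only, the sketch below shows a naive one-call baseline: inflate the training sample with every allowed perturbation of every example, then hand the inflated sample to the non-robust learner. All names (`perturbations`, `nonrobust_learner`, the threshold rule, the integer examples) are hypothetical choices for this sketch, not the paper's constructions.

```python
# Hypothetical toy setup: examples are integers, the adversary may shift
# each example by at most 1, and labels are binary.

def perturbations(x):
    """Toy adversary: all allowed perturbations of example x (shift by <= 1)."""
    return [x - 1, x, x + 1]

def nonrobust_learner(sample):
    """Black-box non-robust learner A: a deliberately non-robust threshold
    rule that predicts 1 just above the largest example labeled 0."""
    t = max(x for (x, y) in sample if y == 0)
    return lambda x: 1 if x > t else 0

def robust_via_inflation(sample, A):
    """Naive one-call reduction (NOT the paper's algorithm): train A on
    every allowed perturbation of every example, with the original label."""
    inflated = [(z, y) for (x, y) in sample for z in perturbations(x)]
    return A(inflated)

def robust_loss(h, sample):
    """Fraction of examples misclassified under SOME allowed perturbation."""
    return sum(
        any(h(z) != y for z in perturbations(x)) for (x, y) in sample
    ) / len(sample)

sample = [(0, 0), (4, 1)]
h_plain = nonrobust_learner(sample)            # robustly errs on x = 0
h_robust = robust_via_inflation(sample, nonrobust_learner)
print(robust_loss(h_plain, sample))   # 0.5
print(robust_loss(h_robust, sample))  # 0.0
```

This baseline makes one call to A but pays linearly in the number of perturbations per example (the inflated sample grows by that factor); the paper's contribution is a reduction whose number of calls to A scales only logarithmically in that quantity, together with a matching lower bound.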
Cite
Text
Montasser et al. "Reducing Adversarially Robust Learning to Non-Robust PAC Learning." Neural Information Processing Systems, 2020.
Markdown
[Montasser et al. "Reducing Adversarially Robust Learning to Non-Robust PAC Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/montasser2020neurips-reducing/)
BibTeX
@inproceedings{montasser2020neurips-reducing,
title = {{Reducing Adversarially Robust Learning to Non-Robust PAC Learning}},
author = {Montasser, Omar and Hanneke, Steve and Srebro, Nati},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/montasser2020neurips-reducing/}
}