Adversarially Robust PAC Learnability of Real-Valued Functions
Abstract
We study robustness to test-time adversarial attacks in the regression setting with $\ell_p$ losses and arbitrary perturbation sets. We address the question of which function classes are PAC learnable in this setting. We show that classes of finite fat-shattering dimension are learnable in both the realizable and agnostic settings. Moreover, convex function classes are even properly learnable, whereas some non-convex function classes provably require improper learning algorithms. Our main technique is based on a construction of an adversarially robust sample compression scheme whose size is determined by the fat-shattering dimension. Along the way, we introduce a novel agnostic sample compression scheme for real-valued functions, which may be of independent interest.
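For readers unfamiliar with the terminology, the two central objects of the abstract can be written out using their standard definitions (these formulas are the textbook versions, not taken verbatim from the paper):

```latex
% Adversarially robust \ell_p loss: the adversary perturbs x within a set U(x).
\ell^{\mathrm{rob}}_p(f; x, y) \;=\; \sup_{z \in \mathcal{U}(x)} \lvert f(z) - y \rvert^{p}

% Fat-shattering dimension at scale \gamma: points x_1,\dots,x_m are
% \gamma-shattered by \mathcal{F} if there exist witnesses r_1,\dots,r_m
% such that every sign pattern b \in \{0,1\}^m is realized with margin \gamma:
\forall b \in \{0,1\}^m \;\exists f \in \mathcal{F}:\quad
f(x_i) \ge r_i + \gamma \text{ if } b_i = 1,
\qquad
f(x_i) \le r_i - \gamma \text{ if } b_i = 0.

% \mathrm{fat}_\gamma(\mathcal{F}) is the largest m for which some
% m points are \gamma-shattered; "finite fat-shattering dimension"
% means this is finite for every \gamma > 0.
```

The abstract's main result can then be read as: finiteness of $\mathrm{fat}_\gamma(\mathcal{F})$ for all $\gamma > 0$ suffices for PAC learnability under the robust loss $\ell^{\mathrm{rob}}_p$, for arbitrary perturbation sets $\mathcal{U}$.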
Cite
Text
Attias and Hanneke. "Adversarially Robust PAC Learnability of Real-Valued Functions." International Conference on Machine Learning, 2023.
Markdown
[Attias and Hanneke. "Adversarially Robust PAC Learnability of Real-Valued Functions." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/attias2023icml-adversarially/)
BibTeX
@inproceedings{attias2023icml-adversarially,
title = {{Adversarially Robust PAC Learnability of Real-Valued Functions}},
author = {Attias, Idan and Hanneke, Steve},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {1172--1199},
volume = {202},
url = {https://mlanthology.org/icml/2023/attias2023icml-adversarially/}
}