Probabilistically Robust PAC Learning
Abstract
Robey et al. recently proposed a notion of probabilistic robustness which, at a high level, requires a classifier to be robust to most but not all perturbations. They show that for certain hypothesis classes where proper learning under worst-case robustness is \textit{not} possible, proper learning under probabilistic robustness \textit{is} possible, with sample complexity exponentially smaller than that required in the worst-case setting. This motivates the question of whether proper learning under probabilistic robustness is always possible. In this paper, we show that this is \textit{not} the case. We exhibit examples of hypothesis classes $\mathcal{H}$ with finite VC dimension that are \textit{not} probabilistically robustly PAC learnable with \textit{any} proper learning rule.
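For concreteness, here is a sketch of the loss underlying this notion, following Robey et al.'s probabilistically robust learning framework; the tolerance parameter $\rho$, the perturbation ball $B_\epsilon(x)$, and the uniform perturbation measure below are illustrative assumptions of this sketch, not notation taken from the abstract. The $\rho$-probabilistically robust loss of a hypothesis $h$ on an example $(x, y)$ can be written as
$$\ell^{\rho}\bigl(h, (x, y)\bigr) \;=\; \mathbb{1}\Bigl\{\Pr_{x' \sim \mathrm{Unif}(B_\epsilon(x))}\bigl[\,h(x') \neq y\,\bigr] \;>\; \rho\Bigr\},$$
so $h$ is penalized at $(x, y)$ only when the fraction of perturbations it misclassifies exceeds $\rho$. Taking $\rho = 0$ recovers (up to measure-zero sets of perturbations) the worst-case adversarial loss, while larger $\rho$ relaxes toward average-case robustness.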
Cite
Text
Raman et al. "Probabilistically Robust PAC Learning." NeurIPS 2022 Workshops: MLSW, 2022.
Markdown
[Raman et al. "Probabilistically Robust PAC Learning." NeurIPS 2022 Workshops: MLSW, 2022.](https://mlanthology.org/neuripsw/2022/raman2022neuripsw-probabilistically/)
BibTeX
@inproceedings{raman2022neuripsw-probabilistically,
  title = {{Probabilistically Robust PAC Learning}},
  author = {Raman, Vinod and Tewari, Ambuj and Subedi, Unique},
  booktitle = {NeurIPS 2022 Workshops: MLSW},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/raman2022neuripsw-probabilistically/}
}