Agnostic Multi-Robust Learning Using ERM

Abstract

A fundamental problem in robust learning is asymmetry: a learner needs to correctly classify every one of exponentially many perturbations that an adversary might make to a test-time natural example, whereas the attacker only needs to find one successful perturbation. Xiang et al. [2022] proposed an algorithm that, in the context of patch attacks for image classification, reduces the effective number of perturbations from exponential to polynomial and learns using an ERM oracle. However, to achieve its guarantee, their algorithm requires the natural examples to be robustly realizable. This prompts the natural question: can we extend their approach to the non-robustly-realizable case, where there is no classifier with zero robust error? Our first contribution is to answer this question affirmatively by reducing the problem to a setting in which an algorithm proposed by Feige et al. [2015] can be applied, and in the process extending their guarantees. Next, we extend our results to a multi-group setting and introduce a novel agnostic multi-robust learning problem, where the goal is to learn a predictor that achieves low robust loss on a (potentially) rich collection of subgroups.
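As a rough formalization of the multi-robust objective described above (the notation here is illustrative, not taken verbatim from the paper): writing U(x) for the set of admissible perturbations of an example x, H for the hypothesis class, and D_g for the distribution of subgroup g in a collection G, the goal can be read as finding a predictor h whose robust loss on every subgroup is close to the best achievable for that subgroup.

% Illustrative sketch only; symbols U, H, D_g, G are assumed notation.
% Robust loss of predictor h on subgroup distribution D_g:
\mathrm{err}_{\mathcal{U}}(h; D_g) \;=\; \Pr_{(x,y)\sim D_g}\!\left[\, \exists\, z \in \mathcal{U}(x) : h(z) \neq y \,\right]

% Agnostic multi-robust goal: simultaneously for every subgroup g in G,
% be competitive with the best predictor in H for that subgroup.
\forall g \in \mathcal{G}: \quad \mathrm{err}_{\mathcal{U}}(h; D_g) \;\le\; \min_{h^{*} \in \mathcal{H}} \mathrm{err}_{\mathcal{U}}(h^{*}; D_g) + \epsilon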

Cite

Text

Ahmadi et al. "Agnostic Multi-Robust Learning Using ERM." Artificial Intelligence and Statistics, 2024.

Markdown

[Ahmadi et al. "Agnostic Multi-Robust Learning Using ERM." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/ahmadi2024aistats-agnostic/)

BibTeX

@inproceedings{ahmadi2024aistats-agnostic,
  title     = {{Agnostic Multi-Robust Learning Using ERM}},
  author    = {Ahmadi, Saba and Blum, Avrim and Montasser, Omar and Stangl, Kevin M},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2024},
  pages     = {2242--2250},
  volume    = {238},
  url       = {https://mlanthology.org/aistats/2024/ahmadi2024aistats-agnostic/}
}