Characterizing the Optimal $0-1$ Loss for Multi-Class Classification with a Test-Time Attacker

Abstract

Finding classifiers robust to adversarial examples is critical for their safe deployment. Determining the robustness of the best possible classifier under a given threat model for a fixed data distribution and comparing it to that achieved by state-of-the-art training methods is thus an important diagnostic tool. In this paper, we find achievable information-theoretic lower bounds on robust loss in the presence of a test-time attacker for *multi-class classifiers on any discrete dataset*. We provide a general framework for finding the optimal $0-1$ loss that revolves around the construction of a conflict hypergraph from the data and adversarial constraints. The prohibitive cost of this formulation in practice leads us to formulate other variants of the attacker-classifier game that more efficiently determine the range of the optimal loss. Our evaluation provides, for the first time, an analysis of the gap to optimal robustness for classifiers in the multi-class setting on benchmark datasets.
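
To make the conflict-graph idea concrete, below is a minimal, hedged sketch (not the authors' code) of the pairwise special case: two examples with different labels conflict if an attacker can perturb both to a common point, so no classifier can be correct on both, and a linear program over these conflicts yields a valid lower bound on the optimal robust $0-1$ loss. The full framework in the paper uses hyperedges spanning more than two classes; the function name, the choice of an $\ell_2$ perturbation budget `epsilon`, and the use of SciPy's LP solver here are illustrative assumptions.

```python
# Hedged sketch: pairwise conflict-graph lower bound on the optimal robust 0-1 loss
# for an L2-bounded test-time attacker on a small discrete dataset. The paper's full
# framework builds a conflict *hypergraph*; this sketch keeps only pairwise conflicts,
# which already gives a valid (possibly loose) lower bound.
import numpy as np
from scipy.optimize import linprog


def pairwise_conflict_lower_bound(X, y, epsilon):
    """Lower-bound the optimal robust 0-1 loss via an LP on the pairwise conflict graph.

    Examples i, j with different labels conflict if their epsilon-balls intersect,
    i.e. ||x_i - x_j||_2 <= 2 * epsilon: an attacker can move both to a common
    point, so no classifier can classify both correctly under attack.
    """
    n = len(y)
    conflicts = [
        (i, j)
        for i in range(n)
        for j in range(i + 1, n)
        if y[i] != y[j] and np.linalg.norm(X[i] - X[j]) <= 2 * epsilon
    ]
    # LP: maximize sum_i q_i, where q_i is the probability example i is classified
    # correctly, subject to q_i + q_j <= 1 for every conflicting pair and 0 <= q_i <= 1.
    c = -np.ones(n)  # linprog minimizes, so negate the objective to maximize
    if conflicts:
        A_ub = np.zeros((len(conflicts), n))
        for row, (i, j) in enumerate(conflicts):
            A_ub[row, i] = A_ub[row, j] = 1.0
        b_ub = np.ones(len(conflicts))
    else:
        A_ub, b_ub = None, None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
    max_correct = -res.fun
    return 1.0 - max_correct / n  # lower bound on the optimal robust 0-1 loss
```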

Cite

Text

Dai et al. "Characterizing the Optimal $0-1$ Loss for Multi-Class Classification with a Test-Time Attacker." ICML 2023 Workshops: AdvML-Frontiers, 2023.

Markdown

[Dai et al. "Characterizing the Optimal $0-1$ Loss for Multi-Class Classification with a Test-Time Attacker." ICML 2023 Workshops: AdvML-Frontiers, 2023.](https://mlanthology.org/icmlw/2023/dai2023icmlw-characterizing/)

BibTeX

@inproceedings{dai2023icmlw-characterizing,
  title     = {{Characterizing the Optimal $0-1$ Loss for Multi-Class Classification with a Test-Time Attacker}},
  author    = {Dai, Sihui and Ding, Wenxin and Bhagoji, Arjun Nitin and Cullina, Daniel and Zhao, Ben Y. and Zheng, Haitao and Mittal, Prateek},
  booktitle = {ICML 2023 Workshops: AdvML-Frontiers},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/dai2023icmlw-characterizing/}
}