Calibrated Surrogate Losses for Adversarially Robust Classification

Abstract

Adversarially robust classification seeks a classifier that is insensitive to adversarial perturbations of test patterns. This problem is often formulated via a minimax objective, where the target loss is the worst-case value of the 0-1 loss subject to a bound on the size of perturbation. Recent work has proposed convex surrogates for the adversarial 0-1 loss, in an effort to make optimization more tractable. In this work, we consider the question of which surrogate losses are *calibrated* with respect to the adversarial 0-1 loss, meaning that minimization of the former implies minimization of the latter. We show that no convex surrogate loss is calibrated with respect to the adversarial 0-1 loss when restricted to the class of linear models. We further introduce a class of nonconvex losses and offer necessary and sufficient conditions for losses in this class to be calibrated.
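
For readers skimming the abstract, a minimal sketch of the objects it refers to may help. The notation below ($\gamma$ for the perturbation budget, $\phi$ for a surrogate, $\|\cdot\|_*$ for the dual norm) is illustrative and not quoted from the paper.

```latex
% Sketch of the setup; notation is assumed, not copied from the paper.
% Binary labels y in {-1,+1}, scoring function f, perturbation budget gamma >= 0.

% Adversarial (worst-case) 0-1 loss: the input x may be perturbed by any
% delta with ||delta|| <= gamma before classification.
\[
  \ell_{01}^{\gamma}(f; x, y)
    = \sup_{\|\delta\| \le \gamma} \mathbf{1}\!\left[\, y\, f(x+\delta) \le 0 \,\right].
\]

% For a linear model f(x) = <w, x>, the supremum has a closed form via the
% dual norm: the adversary subtracts gamma * ||w||_* from the margin.
\[
  \ell_{01}^{\gamma}(f; x, y)
    = \mathbf{1}\!\left[\, y \langle w, x \rangle - \gamma \|w\|_{*} \le 0 \,\right].
\]

% Calibration (informal): a surrogate phi is calibrated for the adversarial
% 0-1 loss over a model class if every sequence of models driving the
% phi-risk to its infimum also drives the adversarial 0-1 risk to its infimum.
```

The closed form above shows why, for linear models, the question reduces to one about margin-based losses with a $\gamma$-shifted threshold, which is the setting in which the paper's negative result for convex surrogates is stated.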

Cite

Text

Bao et al. "Calibrated Surrogate Losses for Adversarially Robust Classification." Conference on Learning Theory, 2020.

Markdown

[Bao et al. "Calibrated Surrogate Losses for Adversarially Robust Classification." Conference on Learning Theory, 2020.](https://mlanthology.org/colt/2020/bao2020colt-calibrated/)

BibTeX

@inproceedings{bao2020colt-calibrated,
  title     = {{Calibrated Surrogate Losses for Adversarially Robust Classification}},
  author    = {Bao, Han and Scott, Clayton and Sugiyama, Masashi},
  booktitle = {Conference on Learning Theory},
  year      = {2020},
  pages     = {408--451},
  volume    = {125},
  url       = {https://mlanthology.org/colt/2020/bao2020colt-calibrated/}
}