Stratified Adversarial Robustness with Rejection

Abstract

Recently, there has been emerging interest in adversarially training a classifier with a rejection option (also known as a selective classifier) to boost adversarial robustness. While rejection can incur a cost in many applications, existing studies typically associate zero cost with rejecting perturbed inputs, which can result in the rejection of numerous slightly-perturbed inputs that could be correctly classified. In this work, we study adversarially-robust classification with rejection in the stratified rejection setting, where the rejection cost is modeled by rejection loss functions monotonically non-increasing in the perturbation magnitude. We theoretically analyze the stratified rejection setting and propose a novel defense method, Adversarial Training with Consistent Prediction-based Rejection (CPR), for building a robust selective classifier. Experiments on image datasets demonstrate that the proposed method significantly outperforms existing methods under strong adaptive attacks. For instance, on CIFAR-10, CPR reduces the total robust loss (for different rejection losses) by at least 7.3% under both seen and unseen attacks.
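To make the stratified rejection setting concrete, the sketch below shows one hypothetical rejection loss that is monotonically non-increasing in the perturbation magnitude (a linear ramp from 1 at zero perturbation down to 0 at the attack budget), and how it enters a per-input total robust loss for a selective classifier. The function names, the ramp shape, and the budget value are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

# Hypothetical stratified rejection loss: rejecting a clean input
# (eps = 0) costs 1, and the cost decreases as the perturbation
# magnitude eps grows, reaching 0 at the attack budget eps_max.
# This is just one instance of a loss that is "monotonically
# non-increasing in the perturbation magnitude"; the paper
# considers a family of such functions.
def ramp_rejection_loss(eps, eps_max=8 / 255):
    return float(np.clip(1.0 - eps / eps_max, 0.0, 1.0))

# Per-input total robust loss for a selective classifier:
# - if the input is rejected, pay the rejection loss at that eps;
# - otherwise pay 1 for a misclassification and 0 for a correct
#   prediction.
def total_robust_loss(rejected, correct, eps,
                      rejection_loss=ramp_rejection_loss):
    if rejected:
        return rejection_loss(eps)
    return 0.0 if correct else 1.0
```

Under such a loss, rejecting a slightly-perturbed input that could have been classified correctly is nearly as costly as rejecting a clean one, which is exactly the behavior the zero-cost-rejection setting fails to penalize.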

Cite

Text

Chen et al. "Stratified Adversarial Robustness with Rejection." International Conference on Machine Learning, 2023.

Markdown

[Chen et al. "Stratified Adversarial Robustness with Rejection." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/chen2023icml-stratified/)

BibTeX

@inproceedings{chen2023icml-stratified,
  title     = {{Stratified Adversarial Robustness with Rejection}},
  author    = {Chen, Jiefeng and Raghuram, Jayaram and Choi, Jihye and Wu, Xi and Liang, Yingyu and Jha, Somesh},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {4867--4894},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/chen2023icml-stratified/}
}