Provably Robust Cost-Sensitive Learning via Randomized Smoothing

Abstract

We focus on learning adversarially robust classifiers under cost-sensitive scenarios, where the potential harm of different classwise adversarial transformations is encoded in a cost matrix. Existing methods are either empirical, and thus offer no robustness certificate, or suffer from inherent scalability issues. In this work, we study whether randomized smoothing, a scalable robustness certification framework, can be leveraged to certify cost-sensitive robustness. We first show how to extend the vanilla certification pipeline to provide rigorous guarantees for cost-sensitive robustness. However, when adapting the standard randomized smoothing method to train for cost-sensitive robustness, we observe that the naive reweighting scheme does not achieve desirable performance, because it optimizes the base classifier only indirectly. Motivated by this observation, we propose a more direct training method with fine-grained certified radius optimization schemes designed for different data subgroups. Experiments on image benchmarks demonstrate that our method significantly improves certified cost-sensitive robustness without sacrificing overall accuracy.
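To make the certification idea concrete, below is a minimal Python sketch of how the standard Monte Carlo certificate of randomized smoothing (Cohen et al., 2019) might be extended to a cost-sensitive guarantee: instead of certifying that the predicted class is stable, it certifies that the smoothed prediction avoids every class with nonzero cost for the given label, by lower-bounding the true class's probability and upper-bounding the total mass on costly classes. All names (base_classifier, cost_matrix, certify_cost_sensitive) and the specific bounding scheme are illustrative assumptions, not the authors' implementation.

# Hedged sketch of cost-sensitive certification via randomized smoothing.
# Assumes: base_classifier(x) returns an integer class for a numpy array x,
# and cost_matrix[y, c] > 0 means misclassifying class y as class c is costly.
import numpy as np
from scipy.stats import beta, norm

def sample_noisy_predictions(base_classifier, x, sigma, n, num_classes):
    """Count base-classifier predictions over n Gaussian perturbations of x."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        noisy = x + sigma * np.random.randn(*x.shape)
        counts[base_classifier(noisy)] += 1
    return counts

def certify_cost_sensitive(base_classifier, x, label, cost_matrix,
                           sigma=0.5, n=1000, alpha=0.001, num_classes=10):
    """Return an L2 radius within which the smoothed classifier provably
    avoids all classes costly for `label`, or 0.0 if certification fails."""
    costly = np.flatnonzero(cost_matrix[label])        # classes we must avoid
    counts = sample_noisy_predictions(base_classifier, x, sigma, n, num_classes)
    k_y = counts[label]                                # hits on the true class
    k_c = counts[costly].sum()                         # hits on costly classes
    # Clopper-Pearson bounds, each at confidence 1 - alpha/2.
    p_y_low = beta.ppf(alpha / 2, k_y, n - k_y + 1) if k_y > 0 else 0.0
    p_c_up = beta.ppf(1 - alpha / 2, k_c + 1, n - k_c) if k_c < n else 1.0
    if p_y_low <= p_c_up:
        return 0.0                                     # abstain: cannot certify
    # Two-sided Neyman-Pearson certificate: within this radius, the true
    # class's smoothed probability stays above that of every costly class
    # (each costly class's mass is at most the total costly mass).
    return 0.5 * sigma * (norm.ppf(p_y_low) - norm.ppf(p_c_up))

The design point is that the usual runner-up bound is replaced by a bound on the costly set only: as long as the true class outweighs every costly class under perturbation, the argmax of the smoothed classifier may move among benign classes but can never land on a costly one, which is exactly the cost-sensitive notion of robustness described in the abstract.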

Cite

Text

Xin et al. "Provably Robust Cost-Sensitive Learning via Randomized Smoothing." ICML 2023 Workshops: AdvML-Frontiers, 2023.

Markdown

[Xin et al. "Provably Robust Cost-Sensitive Learning via Randomized Smoothing." ICML 2023 Workshops: AdvML-Frontiers, 2023.](https://mlanthology.org/icmlw/2023/xin2023icmlw-provably/)

BibTeX

@inproceedings{xin2023icmlw-provably,
  title     = {{Provably Robust Cost-Sensitive Learning via Randomized Smoothing}},
  author    = {Xin, Yuan and Backes, Michael and Zhang, Xiao},
  booktitle = {ICML 2023 Workshops: AdvML-Frontiers},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/xin2023icmlw-provably/}
}