Improving L1-Certified Robustness via Randomized Smoothing by Leveraging Box Constraints

Abstract

Randomized smoothing is a popular method to certify robustness of image classifiers to adversarial input perturbations. It is the only certification technique that scales directly to high-dimensional datasets such as ImageNet. However, current techniques are not able to utilize the fact that any adversarial example has to lie in the image space, that is, $[0,1]^d$; otherwise, one can trivially detect it. To address this suboptimality, we derive new certification formulae which lead to significant improvements in the certified $\ell_1$-robustness without the need to adapt the classifiers or to change the smoothing distribution. The code is released at https://github.com/vvoracek/L1-smoothing
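For context, below is a minimal sketch (not code from the paper) of the standard, box-unaware $\ell_1$ certificate via uniform smoothing that the paper improves upon: smoothing with noise drawn uniformly from $[-\lambda, \lambda]^d$ certifies an $\ell_1$ radius of $2\lambda(p - 1/2)$, where $p$ is a lower confidence bound on the probability of the top class (Yang et al., 2020). This formula ignores the $[0,1]^d$ box constraint, which is exactly the slack the paper exploits. The `model` interface, noise half-width `lam`, and Monte Carlo parameters are illustrative assumptions; the improved box-constrained certificates are in the released code.

```python
import numpy as np
from scipy.stats import beta


def certified_l1_radius(model, x, lam=0.5, n=10_000, alpha=0.001, seed=0):
    """Standard (box-unaware) L1 certificate via uniform smoothing.

    `model(batch)` is assumed to map a batch of inputs to integer class
    labels; `lam` is the half-width of the uniform smoothing noise.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-lam, lam, size=(n,) + x.shape)
    preds = model(x[None] + noise)          # labels of n noisy copies of x
    top = np.bincount(preds).argmax()       # empirical top class
    k = int((preds == top).sum())
    # One-sided Clopper-Pearson lower confidence bound on P(f(x+noise)=top).
    p_lower = beta.ppf(alpha, k, n - k + 1)
    if p_lower <= 0.5:
        return top, 0.0                     # abstain: nothing is certified
    # Classical uniform-smoothing certificate (Yang et al., 2020):
    # robust for all perturbations with ||delta||_1 < 2*lam*(p_lower - 1/2).
    return top, 2 * lam * (p_lower - 0.5)
```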

Cite

Text

Voracek and Hein. "Improving L1-Certified Robustness via Randomized Smoothing by Leveraging Box Constraints." International Conference on Machine Learning, 2023.

Markdown

[Voracek and Hein. "Improving L1-Certified Robustness via Randomized Smoothing by Leveraging Box Constraints." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/voracek2023icml-improving/)

BibTeX

@inproceedings{voracek2023icml-improving,
  title     = {{Improving L1-Certified Robustness via Randomized Smoothing by Leveraging Box Constraints}},
  author    = {Voracek, Vaclav and Hein, Matthias},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {35198--35222},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/voracek2023icml-improving/}
}