Randomized Smoothing (almost) in Real Time?
Abstract
Certifying the robustness of Deep Neural Networks (DNNs) is essential in safety-critical domains. Randomized Smoothing (RS) has recently been proposed as a scalable, model-agnostic method for robustness verification; it has achieved excellent results and has been extended to a wide variety of adversarial perturbation scenarios. However, RS carries a hidden cost at inference time, since it requires passing tens of thousands of perturbed samples through the DNN to perform the verification. In this work, we address this challenge and explore what it would take to perform RS much faster, perhaps even in real time, and what happens as we decrease the number of samples by orders of magnitude. Surprisingly, we find that the reduction in average certified radius is modest, even when the number of samples is decreased by two orders of magnitude or more. Under suitable settings, this could pave the way for real-time robustness certification. We perform a detailed theoretical and experimental analysis and show promising results on the standard CIFAR-10 and ImageNet datasets.
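The inference cost described above comes from Monte Carlo estimation: the smoothed classifier's certified L2 radius is R = σ·Φ⁻¹(p_A), where p_A is a high-confidence lower bound on the top-class probability estimated from n noisy forward passes. Below is a minimal sketch of this standard certification procedure (following Cohen et al.'s randomized smoothing, not the authors' own implementation; the classifier f, the noise level sigma, and the single-stage class selection are simplifying assumptions, whereas the full procedure picks the top class with a separate small batch):

```python
import numpy as np
from scipy.stats import norm, binomtest  # SciPy >= 1.7

def certify(f, x, sigma=0.25, n=100_000, alpha=0.001):
    """Sketch of randomized-smoothing certification at a single input x.

    f : assumed classifier mapping a batch of inputs to integer labels.
    n : number of noisy samples, i.e. n forward passes through the DNN --
        the inference cost that the paper reduces by orders of magnitude.
    Returns (class, certified L2 radius), or (None, 0.0) to abstain.
    """
    # Draw n Gaussian-perturbed copies of x (in practice, in mini-batches).
    noise = sigma * np.random.randn(n, *x.shape)
    counts = np.bincount(f(x[None] + noise), minlength=2)
    c_hat = int(counts.argmax())

    # One-sided (1 - alpha) Clopper-Pearson lower bound on the probability
    # that the base classifier predicts c_hat under Gaussian noise.
    p_lower = binomtest(int(counts[c_hat]), n).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact").low

    if p_lower <= 0.5:
        return None, 0.0                     # cannot certify: abstain
    return c_hat, sigma * norm.ppf(p_lower)  # R = sigma * Phi^{-1}(p_A)
```

Because n enters the radius only through the width of the confidence bound, shrinking n degrades p_lower, and hence R, gracefully rather than proportionally; this is the effect the paper analyzes when cutting the sample count by two or more orders of magnitude.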
Cite
Text
Seferis et al. "Randomized Smoothing (almost) in Real Time?." ICML 2023 Workshops: MFPL, 2023.Markdown
[Seferis et al. "Randomized Smoothing (almost) in Real Time?." ICML 2023 Workshops: MFPL, 2023.](https://mlanthology.org/icmlw/2023/seferis2023icmlw-randomized/)BibTeX
@inproceedings{seferis2023icmlw-randomized,
  title = {{Randomized Smoothing (almost) in Real Time?}},
  author = {Seferis, Emmanouil and Burton, Simon and Kollias, Stefanos},
  booktitle = {ICML 2023 Workshops: MFPL},
  year = {2023},
  url = {https://mlanthology.org/icmlw/2023/seferis2023icmlw-randomized/}
}