Almost Tight L0-Norm Certified Robustness of Top-K Predictions Against Adversarial Perturbations
Abstract
Top-$k$ predictions are used in many real-world applications such as machine learning as a service, recommender systems, and web search. An $\ell_0$-norm adversarial perturbation characterizes an attack that arbitrarily modifies some features of an input such that a classifier makes an incorrect prediction for the perturbed input. $\ell_0$-norm adversarial perturbations are easy to interpret and can be implemented in the physical world. Therefore, certifying the robustness of top-$k$ predictions against $\ell_0$-norm adversarial perturbations is important. However, existing studies focused on certifying either the $\ell_0$-norm robustness of top-$1$ predictions or the $\ell_2$-norm robustness of top-$k$ predictions. In this work, we aim to bridge the gap. Our approach is based on randomized smoothing, which builds a provably robust classifier from an arbitrary classifier via randomizing an input. Our major theoretical contribution is an almost tight $\ell_0$-norm certified robustness guarantee for top-$k$ predictions. We empirically evaluate our method on CIFAR10 and ImageNet. For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2\% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.
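The idea of randomized smoothing sketched in the abstract, i.e., building a smoothed classifier by randomizing the input and aggregating the base classifier's predictions, can be illustrated with a minimal sketch. This is not the paper's implementation: the names `base_classifier`, `randomize`, and `smoothed_topk` are hypothetical, the base classifier is a toy stub, and randomization here keeps a few randomly chosen features and masks the rest (a form of randomized ablation suited to $\ell_0$ perturbations).

```python
import random
from collections import Counter


def base_classifier(features):
    """Toy stand-in for an arbitrary base classifier: predicts the
    most frequent visible (non-masked) feature value."""
    visible = [v for v in features if v is not None]
    return max(set(visible), key=visible.count) if visible else 0


def randomize(x, keep):
    """Randomize an input by keeping `keep` randomly chosen features
    and masking the rest (masked features are set to None)."""
    kept = set(random.sample(range(len(x)), keep))
    return [v if i in kept else None for i, v in enumerate(x)]


def smoothed_topk(x, k, keep, n_samples=1000, seed=0):
    """Smoothed top-k prediction: run the base classifier on many
    randomized copies of x and return the k most frequent labels."""
    random.seed(seed)
    votes = Counter(base_classifier(randomize(x, keep))
                    for _ in range(n_samples))
    return [label for label, _ in votes.most_common(k)]
```

Because an $\ell_0$ attacker changes only a few features, each randomized copy is likely to contain no perturbed feature at all, which is what makes the vote counts amenable to a certified robustness analysis; the actual guarantee in the paper comes from bounding how much these counts can shift under a bounded number of modified features.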
Cite
Text
Jia et al. "Almost Tight L0-Norm Certified Robustness of Top-K Predictions Against Adversarial Perturbations." International Conference on Learning Representations, 2022.

Markdown

[Jia et al. "Almost Tight L0-Norm Certified Robustness of Top-K Predictions Against Adversarial Perturbations." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/jia2022iclr-almost/)

BibTeX
@inproceedings{jia2022iclr-almost,
title = {{Almost Tight L0-Norm Certified Robustness of Top-K Predictions Against Adversarial Perturbations}},
author = {Jia, Jinyuan and Wang, Binghui and Cao, Xiaoyu and Liu, Hongbin and Gong, Neil Zhenqiang},
booktitle = {International Conference on Learning Representations},
year = {2022},
url = {https://mlanthology.org/iclr/2022/jia2022iclr-almost/}
}