Fundamental Limits on the Robustness of Image Classifiers
Abstract
We prove that image classifiers are fundamentally sensitive to small perturbations in their inputs. Specifically, we show that given some image space of $n$-by-$n$ images, all but a tiny fraction of images in any image class induced over that space can be moved outside that class by adding some perturbation whose $p$-norm is $O(n^{1/\max{(p,1)}})$, as long as that image class takes up at most half of the image space. We then show that $O(n^{1/\max{(p,1)}})$ is asymptotically optimal. Finally, we show that an increase in the bit depth of the image space leads to a loss in robustness. We supplement our results with a discussion of their implications for vision systems.
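To get a feel for the scaling stated in the abstract, the short Python sketch below (illustrative only, not code or analysis from the paper) tabulates the perturbation budget $n^{1/\max(p,1)}$ against the $p$-norm of a reference perturbation that changes every one of the $n^2$ pixels by one unit, which is $n^{2/p}$; constants hidden by the $O(\cdot)$ notation are ignored. For finite $p$, the theorem's budget grows much more slowly than the norm of that uniform full-image change.

```python
from math import inf

# Hedged numerical sketch (not from the paper): compare the perturbation
# bound n^(1/max(p,1)) from the abstract with the p-norm of a perturbation
# that changes every pixel of an n-by-n image by 1, i.e. (n^2)^(1/p) = n^(2/p).
# Constants hidden by the O(.) notation are ignored.
for n in (32, 64, 128, 256):
    for p in (1.0, 2.0, inf):
        bound = n ** (1.0 / max(p, 1.0))   # scaling of the theorem's budget
        every_pixel = n ** (2.0 / p)       # p-norm of the all-ones perturbation
        print(f"n={n:4d}  p={p:<4}  bound ~ {bound:8.2f}  ||1||_p = {every_pixel:10.2f}")
```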
Cite
Text
Dai and Gifford. "Fundamental Limits on the Robustness of Image Classifiers." International Conference on Learning Representations, 2023.

Markdown
[Dai and Gifford. "Fundamental Limits on the Robustness of Image Classifiers." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/dai2023iclr-fundamental/)

BibTeX
@inproceedings{dai2023iclr-fundamental,
  title     = {{Fundamental Limits on the Robustness of Image Classifiers}},
  author    = {Dai, Zheng and Gifford, David},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/dai2023iclr-fundamental/}
}