The Fundamental Limits of Neural Networks for Interval Certified Robustness

Abstract

Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning. However, despite substantial efforts, progress on this key challenge has stagnated, calling into question whether interval analysis is a viable path forward. In this paper we present a fundamental result on the limitations of neural networks for robust classification certifiable by interval analysis. Our main theorem shows that no network computing a non-invertible function can be constructed so that interval analysis is precise everywhere. From this we derive a paradox: while every dataset can be robustly classified, there are simple datasets that cannot be provably robustly classified with interval analysis.
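
The following is a minimal sketch (not the authors' code) of the interval bound propagation the abstract refers to. It propagates an input box through a one-hidden-layer ReLU network that computes the non-invertible function f(x) = |x| = ReLU(x) + ReLU(-x), and shows that even though each layer is handled exactly, the composed bound over-approximates the true range, the kind of imprecision the paper's main theorem formalizes.

import numpy as np

def affine_interval(l, u, W, b):
    # Exact interval image of the box [l, u] under x -> W @ x + b.
    c, r = (l + u) / 2.0, (u - l) / 2.0
    c_out = W @ c + b
    r_out = np.abs(W) @ r
    return c_out - r_out, c_out + r_out

def relu_interval(l, u):
    # Exact interval image of the box [l, u] under the elementwise ReLU.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Network computing |x|: hidden layer (x, -x), ReLU, then a sum.
W1, b1 = np.array([[1.0], [-1.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)

l, u = np.array([-1.0]), np.array([1.0])   # input interval [-1, 1]
l, u = affine_interval(l, u, W1, b1)       # hidden pre-activations in [-1, 1] x [-1, 1]
l, u = relu_interval(l, u)                 # [0, 1] x [0, 1]
l, u = affine_interval(l, u, W2, b2)       # IBP output bound

print("IBP bound on f([-1, 1]):", (l[0], u[0]))   # (0.0, 2.0)
print("true range of |x| on [-1, 1]: (0.0, 1.0)")

The gap arises because interval analysis forgets that the two hidden units depend on the same input, so their bounds are combined as if they varied independently; the paper argues this loss cannot be avoided by choosing a different network for such functions.
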

Cite

Text

Mirman et al. "The Fundamental Limits of Neural Networks for Interval Certified Robustness." Transactions on Machine Learning Research, 2022.

Markdown

[Mirman et al. "The Fundamental Limits of Neural Networks for Interval Certified Robustness." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/mirman2022tmlr-fundamental/)

BibTeX

@article{mirman2022tmlr-fundamental,
  title     = {{The Fundamental Limits of Neural Networks for Interval Certified Robustness}},
  author    = {Mirman, Matthew B and Baader, Maximilian and Vechev, Martin},
  journal   = {Transactions on Machine Learning Research},
  year      = {2022},
  url       = {https://mlanthology.org/tmlr/2022/mirman2022tmlr-fundamental/}
}