Compositional Curvature Bounds for Deep Neural Networks

Abstract

A key challenge that threatens the widespread use of neural networks in safety-critical applications is their vulnerability to adversarial attacks. In this paper, we study the second-order behavior of continuously differentiable deep neural networks, focusing on robustness against adversarial perturbations. First, we provide a theoretical analysis of robustness and attack certificates for deep classifiers by leveraging local gradients and upper bounds on the second derivative (curvature constant). Next, we introduce a novel algorithm to analytically compute provable upper bounds on the second derivative of neural networks. This algorithm leverages the compositional structure of the model to propagate the curvature bound layer-by-layer, giving rise to a scalable and modular approach. The proposed bound can serve as a differentiable regularizer to control the curvature of neural networks during training, thereby enhancing robustness. Finally, we demonstrate the efficacy of our method on classification tasks using the MNIST and CIFAR-10 datasets.
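The layer-by-layer propagation described above can be illustrated with a minimal sketch. It assumes the standard composition rule for second-derivative bounds (for f = g∘h, the chain rule f'' = g''(h)[h', h'] + g'(h)·h'' yields Curv(f) ≤ C_g·L_h² + L_g·C_h, where L and C denote Lipschitz and curvature constants); the function name `propagate` and the per-layer constants are illustrative, not from the paper.

```python
def propagate(layers):
    """Fold per-layer (Lipschitz, curvature) pairs into bounds for the
    whole network, layers applied first-to-last.

    For f = g∘h with constants (L_g, C_g) and (L_h, C_h):
        Lip(f)  <= L_g * L_h
        Curv(f) <= C_g * L_h**2 + L_g * C_h
    """
    L, C = 1.0, 0.0  # identity network: Lipschitz 1, curvature 0
    for L_layer, C_layer in layers:
        # new network = layer ∘ previous composition;
        # update C first, since it uses the previous L
        C = C_layer * L ** 2 + L_layer * C
        L = L_layer * L
    return L, C

# Example with hypothetical constants: a linear layer with operator
# norm 2 (curvature 0) followed by an activation with Lipschitz 1
# and curvature bound 1.
L, C = propagate([(2.0, 0.0), (1.0, 1.0)])
```

Because each step is a simple differentiable expression in the per-layer constants, such a bound can be penalized during training, which is how the paper uses it as a curvature regularizer.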

Cite

Text

Entesari et al. "Compositional Curvature Bounds for Deep Neural Networks." International Conference on Machine Learning, 2024.

Markdown

[Entesari et al. "Compositional Curvature Bounds for Deep Neural Networks." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/entesari2024icml-compositional/)

BibTeX

@inproceedings{entesari2024icml-compositional,
  title     = {{Compositional Curvature Bounds for Deep Neural Networks}},
  author    = {Entesari, Taha and Sharifi, Sina and Fazlyab, Mahyar},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {12527--12546},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/entesari2024icml-compositional/}
}