Norm-Based Generalization Bounds for Sparse Neural Networks

Abstract

In this paper, we derive norm-based generalization bounds for sparse ReLU neural networks, including convolutional neural networks. These bounds differ from previous ones because they consider the sparse structure of the neural network architecture and the norms of the convolutional filters, rather than the norms of the (Toeplitz) matrices associated with the convolutional layers. Theoretically, we demonstrate that these bounds are significantly tighter than standard norm-based generalization bounds. Empirically, they offer relatively tight estimates of generalization for various simple classification problems. Collectively, these findings suggest that the sparsity of the underlying target function and of the model's architecture plays a crucial role in the success of deep learning.
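The following is a minimal numerical sketch, not taken from the paper, of the phenomenon the abstract alludes to: for a one-dimensional convolution, the norm of the associated (Toeplitz) matrix grows with the input dimension, while the norm of the filter itself stays fixed, so bounds stated in terms of filter norms can be much tighter. The helper conv_matrix and the chosen filter are illustrative assumptions, not the authors' construction.

import numpy as np

def conv_matrix(kernel, n_in):
    # Illustrative assumption: build the banded Toeplitz matrix that
    # applies a 1-D "valid" convolution with `kernel` to an input of
    # length n_in. Each row is a shifted copy of the filter.
    f = len(kernel)
    n_out = n_in - f + 1
    T = np.zeros((n_out, n_in))
    for i in range(n_out):
        T[i, i:i + f] = kernel
    return T

kernel = np.array([0.5, -1.0, 0.5])
for n_in in (16, 64, 256):
    T = conv_matrix(kernel, n_in)
    # Filter norm is constant; the matrix's Frobenius norm grows with n_in.
    print(n_in, np.linalg.norm(kernel), np.linalg.norm(T, "fro"))

Because every row of T is a shifted copy of the filter, its Frobenius norm equals sqrt(n_out) times the filter's Euclidean norm. Any bound that pays for the norm of the layer matrix therefore loosens as the input dimension grows, whereas a bound in terms of the filter norm does not.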

Cite

Text

Galanti et al. "Norm-Based Generalization Bounds for Sparse Neural Networks." Neural Information Processing Systems, 2023.

Markdown

[Galanti et al. "Norm-Based Generalization Bounds for Sparse Neural Networks." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/galanti2023neurips-normbased/)

BibTeX

@inproceedings{galanti2023neurips-normbased,
  title     = {{Norm-Based Generalization Bounds for Sparse Neural Networks}},
  author    = {Galanti, Tomer and Xu, Mengjia and Galanti, Liane and Poggio, Tomaso},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/galanti2023neurips-normbased/}
}