Sparsity-Aware Generalization Theory for Deep Neural Networks
Abstract
Deep artificial neural networks achieve surprising generalization abilities that remain poorly understood. In this paper, we present a new approach to analyzing generalization for deep feed-forward ReLU networks that takes advantage of the degree of sparsity that is achieved in the hidden layer activations. By developing a framework that accounts for this reduced effective model size for each input sample, we are able to show fundamental trade-offs between sparsity and generalization. Importantly, our results make no strong assumptions about the degree of sparsity achieved by the model, and they improve over recent norm-based approaches. We illustrate our results numerically, demonstrating non-vacuous bounds when coupled with data-dependent priors even in over-parametrized settings.
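To make the central quantity in the abstract concrete, the sketch below (illustrative only, not the authors' code; the network widths, initialization, and function names are assumptions) measures the per-sample activation sparsity of a feed-forward ReLU network, i.e., the fraction of hidden units that are active for a given input. This per-sample quantity determines the reduced effective model size that the paper's bounds exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def active_fraction(x, weights, biases):
    """Forward pass through a feed-forward ReLU network, returning the
    fraction of hidden units with positive pre-activation at each layer."""
    fractions = []
    h = x
    for W, b in zip(weights, biases):
        pre = W @ h + b
        h = relu(pre)
        fractions.append(np.mean(pre > 0))
    return fractions

# Toy over-parametrized network: three hidden layers of width 512 on 32-d inputs.
dims = [32, 512, 512, 512]
weights = [rng.normal(scale=1.0 / np.sqrt(m), size=(n, m))
           for m, n in zip(dims[:-1], dims[1:])]
biases = [np.zeros(n) for n in dims[1:]]

x = rng.normal(size=dims[0])
print(active_fraction(x, weights, biases))  # e.g. roughly 0.5 per layer at random init
```

After training, these per-layer fractions are often well below one, so each input effectively passes through a much smaller sub-network than the full parameter count suggests.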
Cite
Text
Muthukumar and Sulam. "Sparsity-Aware Generalization Theory for Deep Neural Networks." Conference on Learning Theory, 2023.
Markdown
[Muthukumar and Sulam. "Sparsity-Aware Generalization Theory for Deep Neural Networks." Conference on Learning Theory, 2023.](https://mlanthology.org/colt/2023/muthukumar2023colt-sparsityaware/)
BibTeX
@inproceedings{muthukumar2023colt-sparsityaware,
  title = {{Sparsity-Aware Generalization Theory for Deep Neural Networks}},
  author = {Muthukumar, Ramchandran and Sulam, Jeremias},
  booktitle = {Conference on Learning Theory},
  year = {2023},
  pages = {5311--5342},
  volume = {195},
  url = {https://mlanthology.org/colt/2023/muthukumar2023colt-sparsityaware/}
}