A Generalization Bound for Nearly-Linear Networks
Abstract
We consider nonlinear networks as perturbations of linear ones. Based on this approach, we present a novel generalization bound that becomes non-vacuous for networks that are close to being linear. The main advantage over previous works that propose non-vacuous generalization bounds is that our bound is *a priori*: performing the actual training is not required to evaluate the bound. To the best of our knowledge, it is the first non-vacuous generalization bound for neural networks possessing this property.
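As a rough illustration of the perturbation viewpoint described above (the decomposition and the symbols $f_\theta$, $W$, $r_\theta$, and $\varepsilon$ are illustrative placeholders, not the paper's actual definitions):

```latex
% Illustrative sketch only: one way to read "a nonlinear network as a
% perturbation of a linear one". The paper's precise construction may differ.
% The network output is split into a linear map plus a nonlinear residual;
% a bound of this flavor would tighten as the residual shrinks, becoming
% non-vacuous in the nearly-linear regime.
\[
  f_\theta(x) \;=\; W x \;+\; r_\theta(x),
  \qquad
  \varepsilon \;:=\; \sup_{\|x\|\le 1} \bigl\| r_\theta(x) \bigr\|,
\]
% An a-priori bound in this spirit would depend on quantities like
% \varepsilon that are available before training, rather than on the
% trained weights themselves.
```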
Cite

Text

Golikov. "A Generalization Bound for Nearly-Linear Networks." Transactions on Machine Learning Research, 2025.

Markdown

[Golikov. "A Generalization Bound for Nearly-Linear Networks." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/golikov2025tmlr-generalization/)

BibTeX
@article{golikov2025tmlr-generalization,
title = {{A Generalization Bound for Nearly-Linear Networks}},
author = {Golikov, Eugene},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/golikov2025tmlr-generalization/}
}