Noisy Interpolation Learning with Shallow Univariate ReLU Networks

Abstract

Understanding how overparameterized neural networks generalize despite perfect interpolation of noisy training data is a fundamental question. Mallinar et al. (2022) noted that neural networks seem to often exhibit "tempered overfitting", wherein the population risk does not converge to the Bayes optimal error, but neither does it approach infinity, yielding non-trivial generalization. However, this has not been studied rigorously. We provide the first rigorous analysis of the overfitting behavior of regression with minimum norm ($\ell_2$ of weights), focusing on univariate two-layer ReLU networks. We show overfitting is tempered (with high probability) when measured with respect to the $L_1$ loss, but also show that the situation is more complex than suggested by Mallinar et al., and overfitting is catastrophic with respect to the $L_2$ loss, or when taking an expectation over the training set.
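
The sketch below is a minimal numerical illustration of the setting described in the abstract, not the paper's construction or proof technique: it trains a wide two-layer ReLU network to near-interpolation on noisy univariate data (here the clean target is taken to be zero, so the labels are pure noise) and then reports the empirical $L_1$ and $L_2$ errors of the interpolant on fresh test points. The width, optimizer, noise model, and number of steps are illustrative assumptions, and plain gradient training only approximates the minimum-norm interpolant analyzed in the paper.

```python
# Hypothetical illustration (assumed setup, not the paper's exact construction):
# fit a wide two-layer ReLU network to noisy 1D data until it nearly
# interpolates, then compare L1 and L2 errors on fresh samples.
import torch

torch.manual_seed(0)

n, width = 40, 2000
x_train = torch.rand(n, 1)          # uniform inputs on [0, 1]
y_train = torch.randn(n, 1)         # pure label noise (clean target f* = 0)

model = torch.nn.Sequential(
    torch.nn.Linear(1, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train to near-zero training error; an explicit minimum-norm bias
# (as in the paper's analysis) is not enforced here.
for _ in range(20000):
    opt.zero_grad()
    loss = ((model(x_train) - y_train) ** 2).mean()
    loss.backward()
    opt.step()

x_test = torch.rand(100000, 1)      # fresh points; the clean target is 0
with torch.no_grad():
    pred = model(x_test)
    print(f"train MSE: {loss.item():.2e}")
    print(f"population L1 error: {pred.abs().mean().item():.3f}")
    print(f"population L2 error: {(pred ** 2).mean().item():.3f}")
```

In the paper's terminology, a bounded $L_1$ error that stays above the Bayes-optimal value corresponds to tempered overfitting, while the $L_2$ error of the minimum-norm interpolant can blow up; this toy run only illustrates how the two losses are measured, not the theoretical separation itself.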

Cite

Text

Joshi et al. "Noisy Interpolation Learning with Shallow Univariate ReLU Networks." International Conference on Learning Representations, 2024.

Markdown

[Joshi et al. "Noisy Interpolation Learning with Shallow Univariate ReLU Networks." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/joshi2024iclr-noisy/)

BibTeX

@inproceedings{joshi2024iclr-noisy,
  title     = {{Noisy Interpolation Learning with Shallow Univariate ReLU Networks}},
  author    = {Joshi, Nirmit and Vardi, Gal and Srebro, Nathan},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/joshi2024iclr-noisy/}
}