Generalization of Two-Layer Neural Networks: An Asymptotic Viewpoint

Abstract

This paper investigates the generalization properties of two-layer neural networks in high dimensions, i.e., when the number of samples $n$, features $d$, and neurons $h$ tend to infinity at the same rate. Specifically, we derive the exact population risk of the unregularized least squares regression problem with two-layer neural networks when either the first or the second layer is trained using a gradient flow under different initialization setups. When only the second layer coefficients are optimized, we recover the \textit{double descent} phenomenon: a cusp in the population risk appears at $h\approx n$ and further overparameterization decreases the risk. In contrast, when the first layer weights are optimized, we highlight how different scales of initialization lead to different inductive biases, and show that the resulting risk is \textit{independent} of overparameterization. Our theoretical and experimental results suggest that previously studied model setups that provably give rise to \textit{double descent} might not translate to optimizing two-layer neural networks.
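The second-layer-only setting described above can be illustrated numerically: with the first layer frozen at random initialization, the model reduces to random-features least squares, and the min-norm solution typically exhibits a test-risk peak near $h \approx n$. The sketch below is not the paper's derivation; it assumes a noisy linear teacher and uses `numpy`'s pseudoinverse for the min-norm fit, and the function name `rf_test_risk` is ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_test = 100, 20, 1000

# Assumed teacher: noisy linear target (illustrative, not the paper's exact setup).
beta = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ beta + 0.1 * rng.standard_normal(n)
X_test = rng.standard_normal((n_test, d))
y_test = X_test @ beta

def rf_test_risk(h, trials=5):
    """Train only the second layer of a width-h ReLU network (random frozen
    first layer) via the min-norm least-squares solution; average the test
    risk over several first-layer draws."""
    errs = []
    for _ in range(trials):
        W = rng.standard_normal((d, h)) / np.sqrt(d)  # frozen first layer
        Phi = np.maximum(X @ W, 0.0)                  # ReLU random features
        a = np.linalg.pinv(Phi) @ y                   # min-norm second layer
        pred = np.maximum(X_test @ W, 0.0) @ a
        errs.append(np.mean((pred - y_test) ** 2))
    return float(np.mean(errs))

# Sweep the width: the risk typically spikes near h = n, then falls again
# as the network is overparameterized (the double descent shape).
risks = {h: rf_test_risk(h) for h in (20, 50, 100, 200, 400)}
```

Plotting `risks` against `h` usually shows the cusp at the interpolation threshold `h = n` followed by a decrease, matching the abstract's description of the second-layer-trained regime.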

Cite

Text

Ba et al. "Generalization of Two-Layer Neural Networks: An Asymptotic Viewpoint." International Conference on Learning Representations, 2020.

Markdown

[Ba et al. "Generalization of Two-Layer Neural Networks: An Asymptotic Viewpoint." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/ba2020iclr-generalization/)

BibTeX

@inproceedings{ba2020iclr-generalization,
  title     = {{Generalization of Two-Layer Neural Networks: An Asymptotic Viewpoint}},
  author    = {Ba, Jimmy and Erdogdu, Murat and Suzuki, Taiji and Wu, Denny and Zhang, Tianzong},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/ba2020iclr-generalization/}
}