Gradient Dynamics of Shallow Univariate ReLU Networks

Abstract

We present a theoretical and empirical study of the gradient dynamics of overparameterized shallow ReLU networks with one-dimensional input, solving least-squares interpolation. We show that the gradient dynamics of such networks are determined by the gradient flow in a non-redundant parameterization of the network function. We examine the principal qualitative features of this gradient flow. In particular, we determine conditions for two learning regimes: *kernel* and *adaptive*, which depend both on the relative magnitude of the weight initialization in different layers and on the asymptotic behavior of the initialization coefficients in the limit of large network width. We show that learning in the kernel regime yields smooth interpolants, minimizing curvature, and reduces to *cubic splines* for uniform initializations. Learning in the adaptive regime instead favors *linear splines*, where knots cluster adaptively at the sample points.
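The two regimes can be illustrated numerically. Below is a minimal NumPy sketch, not the authors' code: a shallow network f(x) = Σᵢ cᵢ ReLU(aᵢx + bᵢ) is trained by full-batch gradient descent on a toy interpolation problem, and a hypothetical scale knob `alpha` on the outer weights stands in for the paper's initialization conditions. Since the inner-layer gradients are proportional to the outer weights, small `alpha` keeps the knots -bᵢ/aᵢ of the piecewise-linear interpolant nearly frozen (kernel-like behavior), while large `alpha` lets them move (adaptive behavior). All sizes and hyperparameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D least-squares interpolation problem (hypothetical data and sizes).
x = np.linspace(-1.0, 1.0, 5)
y = np.sin(np.pi * x)

def train(alpha, width=256, lr=1e-3, steps=20000):
    """Full-batch gradient descent on f(x) = sum_i c_i * relu(a_i * x + b_i).

    `alpha` sets the outer-weight scale relative to the inner weights.
    Illustrative only; the paper's precise scaling conditions differ.
    """
    a0 = rng.normal(size=width)
    b0 = rng.normal(size=width)
    a, b = a0.copy(), b0.copy()
    c = alpha * rng.normal(size=width) / width
    n = len(x)
    for _ in range(steps):
        pre = np.outer(x, a) + b            # (n, width) pre-activations
        h = np.maximum(pre, 0.0)            # ReLU features
        r = h @ c - y                       # residuals
        m = (pre > 0).astype(float)         # ReLU derivative mask
        ga = (m * c).T @ (r * x) / n        # grad of 0.5*mean(r**2) w.r.t. a
        gb = (m * c).T @ r / n
        gc = h.T @ r / n
        a -= lr * ga
        b -= lr * gb
        c -= lr * gc
    # Knots -b/a move only when the inner weights (a, b) move, so their
    # drift from initialization distinguishes the two regimes.
    drift = np.abs(a - a0).mean() + np.abs(b - b0).mean()
    loss = 0.5 * np.mean((np.maximum(np.outer(x, a) + b, 0.0) @ c - y) ** 2)
    print(f"alpha={alpha:g}  loss={loss:.2e}  inner-weight drift={drift:.4f}")

for alpha in (0.1, 10.0):
    train(alpha)
```

Under these assumptions, both runs fit the samples, but the small-`alpha` run leaves the inner weights, and hence the knots, close to their initialization, while the large-`alpha` run moves them substantially.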

Cite

Text

Williams et al. "Gradient Dynamics of Shallow Univariate ReLU Networks." Neural Information Processing Systems, 2019.

Markdown

[Williams et al. "Gradient Dynamics of Shallow Univariate ReLU Networks." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/williams2019neurips-gradient/)

BibTeX

@inproceedings{williams2019neurips-gradient,
  title     = {{Gradient Dynamics of Shallow Univariate ReLU Networks}},
  author    = {Williams, Francis and Trager, Matthew and Panozzo, Daniele and Silva, Claudio and Zorin, Denis and Bruna, Joan},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {8378--8387},
  url       = {https://mlanthology.org/neurips/2019/williams2019neurips-gradient/}
}