Gradient Flow Dynamics of Shallow ReLU Networks for Square Loss and Orthogonal Inputs

Abstract

The training of neural networks by gradient descent methods is a cornerstone of the deep learning revolution. Yet, despite some recent progress, a complete theory explaining its success is still missing. This article presents, for orthogonal input vectors, a precise description of the gradient flow dynamics of training one-hidden-layer ReLU neural networks for the mean squared error at small initialisation. In this setting, despite non-convexity, we show that the gradient flow converges to zero loss and characterise its implicit bias towards minimum variation norm. Furthermore, some interesting phenomena are highlighted: a quantitative description of the initial alignment phenomenon and a proof that the process follows a specific saddle-to-saddle dynamics.
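
The following is a minimal sketch, not taken from the paper, of the setting the abstract describes: a one-hidden-layer ReLU network trained on the mean squared error with orthogonal inputs and small initialisation, where gradient flow is approximated by gradient descent with a small step size. All names, dimensions and hyperparameters are illustrative assumptions.

# Sketch of the abstract's setting (assumed setup, not the authors' code):
# gradient flow approximated by small-step gradient descent on a
# one-hidden-layer ReLU network with square loss and orthogonal inputs.
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 4, 4, 50          # n orthogonal inputs in R^d, m hidden neurons
X = np.eye(d)[:n]           # rows are orthonormal, hence orthogonal inputs
y = rng.normal(size=n)      # arbitrary scalar labels

scale = 1e-3                # small initialisation
W = scale * rng.normal(size=(m, d))   # hidden-layer weights
a = scale * rng.normal(size=m)        # output-layer weights

def forward(X, W, a):
    # network output f(x) = sum_j a_j * relu(<w_j, x>)
    return np.maximum(X @ W.T, 0.0) @ a

lr, steps = 1e-2, 200_000   # small step size to mimic gradient flow
for t in range(steps):
    pre = X @ W.T                        # (n, m) pre-activations
    act = np.maximum(pre, 0.0)           # ReLU activations
    resid = act @ a - y                  # residuals of the loss (1/2n)||f(X)-y||^2
    grad_a = act.T @ resid / n
    grad_W = ((resid[:, None] * (pre > 0)) * a).T @ X / n
    a -= lr * grad_a
    W -= lr * grad_W

loss = 0.5 * np.mean((forward(X, W, a) - y) ** 2)
print(f"final loss: {loss:.3e}")         # expected to approach zero

With a small enough initialisation scale, a run of this sketch typically shows an initial plateau while neurons align, followed by a sharp drop of the loss, which is the qualitative picture (alignment phase, then saddle-to-saddle dynamics) that the paper makes precise.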

Cite

Text

Boursier et al. "Gradient Flow Dynamics of Shallow ReLU Networks for Square Loss and Orthogonal Inputs." Neural Information Processing Systems, 2022.

Markdown

[Boursier et al. "Gradient Flow Dynamics of Shallow ReLU Networks for Square Loss and Orthogonal Inputs." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/boursier2022neurips-gradient/)

BibTeX

@inproceedings{boursier2022neurips-gradient,
  title     = {{Gradient Flow Dynamics of Shallow ReLU Networks for Square Loss and Orthogonal Inputs}},
  author    = {Boursier, Etienne and Pillaud-Vivien, Loucas and Flammarion, Nicolas},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/boursier2022neurips-gradient/}
}