Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks

Abstract

When optimizing over-parameterized models, such as deep neural networks, a large set of parameters can achieve zero training error. In such cases, the choice of the optimization algorithm and its hyper-parameters introduces biases that lead to convergence to specific minimizers of the objective. Consequently, this choice can be considered as an implicit regularization for the training of over-parameterized models. In this work, we push this idea further by studying the discrete gradient dynamics of the training of a two-layer linear network with the least-squares loss. Using a time rescaling, we show that, with a vanishing initialization and a small enough step size, this dynamics sequentially learns the solutions of a reduced-rank regression with a gradually increasing rank.
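The following is a minimal NumPy sketch, not the authors' code, illustrating the incremental-learning effect described in the abstract: gradient descent on a two-layer linear network with a near-zero initialization and a small step size picks up the singular directions of the regression problem one rank at a time. All variable names, dimensions, and constants (teacher spectrum, step size, number of steps) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, h = 200, 10, 5, 10        # samples, input dim, output dim, hidden width
X = rng.standard_normal((n, d))

# Planted teacher with well-separated singular values, so the ranks are
# learned at clearly separated time scales.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
V, _ = np.linalg.qr(rng.standard_normal((k, k)))
W_star = U @ np.diag([5.0, 3.0, 1.5, 0.7, 0.3]) @ V.T   # (d, k)
Y = X @ W_star

eps, lr, steps = 1e-4, 5e-3, 15000
W1 = eps * rng.standard_normal((d, h))   # first layer (vanishing initialization)
W2 = eps * rng.standard_normal((h, k))   # second layer

for t in range(steps):
    P = W1 @ W2                        # end-to-end linear map
    R = X.T @ (X @ P - Y) / n          # gradient of the least-squares loss w.r.t. P
    g1, g2 = R @ W2.T, W1.T @ R        # chain rule for each factor
    W1 -= lr * g1
    W2 -= lr * g2
    if t % 2500 == 0:
        s = np.linalg.svd(W1 @ W2, compute_uv=False)
        print(f"step {t:6d}  singular values of W1 W2: {np.round(s, 3)}")
```

With a whitened design (X.T X / n close to the identity), the singular values of the product W1 W2 activate one after another, so at intermediate times the end-to-end map tracks a reduced-rank regression solution of gradually increasing rank, matching the behavior described in the abstract.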

Cite

Text

Gidel et al. "Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks." Neural Information Processing Systems, 2019.

Markdown

[Gidel et al. "Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/gidel2019neurips-implicit/)

BibTeX

@inproceedings{gidel2019neurips-implicit,
  title     = {{Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks}},
  author    = {Gidel, Gauthier and Bach, Francis and Lacoste-Julien, Simon},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {3202--3211},
  url       = {https://mlanthology.org/neurips/2019/gidel2019neurips-implicit/}
}