The Dynamics of Gradient Descent for Overparametrized Neural Networks
Abstract
We consider the dynamics of gradient descent (GD) in overparameterized single hidden layer neural networks with a squared loss function. Recently, it has been shown that, under some conditions, the parameter values obtained using GD achieve zero training error and generalize well if the initial conditions are chosen appropriately. Here, through a Lyapunov analysis, we show that the dynamics of neural network weights under GD converge to a point which is close to the minimum norm solution subject to the condition that there is no training error when using the linear approximation to the neural network.
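The key intuition behind the result can be illustrated outside the paper's setting with a small numerical sketch (not taken from the paper): for an overparameterized linear least-squares problem, which plays the role of the linear approximation to the network, gradient descent on the squared loss initialized at the origin drives the training error to zero and converges to the minimum-norm interpolating solution. The toy dimensions, step size, and iteration count below are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy problem (not from the paper): overparameterized
# least squares y = X w with many more parameters than samples.
rng = np.random.default_rng(0)
n, p = 20, 200                      # n samples, p parameters (p >> n)
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Gradient descent on the squared loss 0.5 * ||X w - y||^2, started at zero.
# Iterates stay in the row space of X, so GD converges to the minimum-norm
# solution among all interpolating solutions.
w = np.zeros(p)
lr = 1.0 / np.linalg.norm(X, 2) ** 2   # step size below 1/L for stability
for _ in range(2000):
    w -= lr * X.T @ (X @ w - y)

# Minimum-norm interpolating solution via the pseudoinverse, for comparison.
w_min_norm = np.linalg.pinv(X) @ y

print("training error:", np.linalg.norm(X @ w - y))                 # ~0
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))  # ~0
```

In this linearized setting the equivalence is exact; the paper's Lyapunov analysis quantifies how close the GD dynamics of the actual (nonlinear) overparameterized network remain to such a minimum-norm solution.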
Cite
Text
Satpathi and Srikant. "The Dynamics of Gradient Descent for Overparametrized Neural Networks." Proceedings of the 3rd Conference on Learning for Dynamics and Control, 2021.
Markdown
[Satpathi and Srikant. "The Dynamics of Gradient Descent for Overparametrized Neural Networks." Proceedings of the 3rd Conference on Learning for Dynamics and Control, 2021.](https://mlanthology.org/l4dc/2021/satpathi2021l4dc-dynamics/)
BibTeX
@inproceedings{satpathi2021l4dc-dynamics,
title = {{The Dynamics of Gradient Descent for Overparametrized Neural Networks}},
author = {Satpathi, Siddhartha and Srikant, R},
booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
year = {2021},
pages = {373--384},
volume = {144},
url = {https://mlanthology.org/l4dc/2021/satpathi2021l4dc-dynamics/}
}