Dynamical Mean-Field Theory for Stochastic Gradient Descent in Gaussian Mixture Classification

Abstract

We analyze in closed form the learning dynamics of stochastic gradient descent (SGD) for a single-layer neural network classifying a high-dimensional Gaussian mixture where each cluster is assigned one of two labels. This problem provides a prototype of a non-convex loss landscape with interpolating regimes and a large generalization gap. We define a particular stochastic process for which SGD can be extended to a continuous-time limit that we call stochastic gradient flow. In the full-batch limit, we recover the standard gradient flow. We apply dynamical mean-field theory from statistical physics to track the dynamics of the algorithm in the high-dimensional limit via a self-consistent stochastic process. We explore the performance of the algorithm as a function of the control parameters, shedding light on how it navigates the loss landscape.
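
To make the setting concrete, here is a minimal sketch, not the paper's exact protocol, of mini-batch SGD on a single-layer (linear) classifier trained on a two-cluster high-dimensional Gaussian mixture with ±1 labels. The cluster geometry, loss (logistic plus ridge), learning rate, batch size, and number of steps are illustrative assumptions, not values taken from the paper.

```python
# Sketch: mini-batch SGD for a single-layer classifier on a two-cluster
# Gaussian mixture with labels y = +/-1. All hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)

d, n = 1000, 2000                      # input dimension, number of samples
snr, noise_std = 3.0, 1.0              # cluster-mean norm and per-component noise (assumed)

mu = rng.standard_normal(d)
mu *= snr / np.linalg.norm(mu)         # cluster mean with norm `snr`

# Each sample sits around +y * mu with isotropic Gaussian noise.
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu[None, :] + noise_std * rng.standard_normal((n, d))

w = rng.standard_normal(d) / np.sqrt(d)        # single-layer weights
lr, batch, reg, steps = 0.1, 50, 0.01, 2000    # assumed hyperparameters

def grad(w, Xb, yb):
    """Gradient of the logistic loss plus a ridge penalty on a mini-batch."""
    margins = yb * (Xb @ w)
    g = -(yb / (1.0 + np.exp(margins))) @ Xb / len(yb)
    return g + reg * w

for t in range(steps):
    idx = rng.choice(n, size=batch, replace=False)   # fresh mini-batch each step
    w -= lr * grad(w, X[idx], y[idx])

acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {acc:.3f}")
```

In this picture, the stochastic gradient flow studied in the paper corresponds to a continuous-time limit of such an update, and the full-batch case (`batch = n`) reduces to ordinary gradient flow.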

Cite

Text

Mignacco et al. "Dynamical Mean-Field Theory for Stochastic Gradient Descent in Gaussian Mixture Classification." Neural Information Processing Systems, 2020.

Markdown

[Mignacco et al. "Dynamical Mean-Field Theory for Stochastic Gradient Descent in Gaussian Mixture Classification." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/mignacco2020neurips-dynamical/)

BibTeX

@inproceedings{mignacco2020neurips-dynamical,
  title     = {{Dynamical Mean-Field Theory for Stochastic Gradient Descent in Gaussian Mixture Classification}},
  author    = {Mignacco, Francesca and Krzakala, Florent and Urbani, Pierfrancesco and Zdeborová, Lenka},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/mignacco2020neurips-dynamical/}
}