Revisiting "Qualitatively Characterizing Neural Network Optimization Problems"

Abstract

We revisit and extend the experiments of Goodfellow et al. (2015), who showed that - for then state-of-the-art networks - "the objective function has a simple, approximately convex shape" along the linear path between initialization and the trained weights. We do not find this to be the case for modern networks on CIFAR-10 and ImageNet. Instead, although loss is roughly monotonically non-increasing along this path, it remains high until close to the optimum. In addition, training quickly becomes linearly separated from the optimum by loss barriers. We conclude that, although Goodfellow et al.'s findings describe the "relatively easy to optimize" MNIST setting, behavior is qualitatively different in modern settings.
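
The core measurement behind the abstract is simple: evaluate the loss at points on the straight line in weight space between the initial parameters and the trained parameters. Below is a minimal sketch of that experiment, assuming a PyTorch classifier; the function name, the steps parameter, and the surrounding model, loss_fn, and data_loader objects are illustrative stand-ins, not code from the paper.

import copy
import torch

def loss_along_linear_path(model, theta_0, theta_T, loss_fn, data_loader, steps=25):
    """Average loss at evenly spaced points on the segment
    theta(alpha) = (1 - alpha) * theta_0 + alpha * theta_T."""
    probe = copy.deepcopy(model)
    losses = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Interpolate every floating-point entry of the state dict;
        # keep integer buffers (e.g. BatchNorm step counters) from the endpoint.
        interpolated = {}
        for name, w0 in theta_0.items():
            wT = theta_T[name]
            if torch.is_floating_point(w0):
                interpolated[name] = (1 - alpha) * w0 + alpha * wT
            else:
                interpolated[name] = wT
        probe.load_state_dict(interpolated)
        probe.eval()
        # Average the loss over the evaluation set at this interpolation point.
        total, count = 0.0, 0
        with torch.no_grad():
            for x, y in data_loader:
                total += loss_fn(probe(x), y).item() * len(y)
                count += len(y)
        losses.append(total / count)
    return losses

Here theta_0 would be a deep copy of model.state_dict() taken at initialization and theta_T a copy taken after training; plotting losses against alpha gives the kind of curve the abstract describes. Interpolating from a mid-training checkpoint to theta_T instead probes the "loss barriers" mentioned above.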

Cite

Text

Frankle. "Revisiting "Qualitatively Characterizing Neural Network Optimization Problems"." NeurIPS 2020 Workshops: DL-IG, 2020.

Markdown

[Frankle. "Revisiting "Qualitatively Characterizing Neural Network Optimization Problems"." NeurIPS 2020 Workshops: DL-IG, 2020.](https://mlanthology.org/neuripsw/2020/frankle2020neuripsw-revisiting/)

BibTeX

@inproceedings{frankle2020neuripsw-revisiting,
  title     = {{Revisiting ``Qualitatively Characterizing Neural Network Optimization Problems''}},
  author    = {Frankle, Jonathan},
  booktitle = {NeurIPS 2020 Workshops: DL-IG},
  year      = {2020},
  url       = {https://mlanthology.org/neuripsw/2020/frankle2020neuripsw-revisiting/}
}