Linear Mode Connectivity and the Lottery Ticket Hypothesis

Abstract

We study whether a neural network optimizes to the same, linearly connected minimum under different samples of SGD noise (e.g., random data order and augmentation). We find that standard vision models become stable to SGD noise in this way early in training. From then on, the outcome of optimization is determined to a linearly connected region. We use this technique to study iterative magnitude pruning (IMP), the procedure used by work on the lottery ticket hypothesis to identify subnetworks that could have trained in isolation to full accuracy. We find that these subnetworks only reach full accuracy when they are stable to SGD noise, which either occurs at initialization for small-scale settings (MNIST) or early in training for large-scale settings (ResNet-50 and Inception-v3 on ImageNet).
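The stability test described in the abstract has a simple mechanical core: train two copies of the network from the same weights under different SGD noise, then measure test error along the straight line between the two solutions. The following is a minimal PyTorch-style sketch, not the authors' released code; `evaluate` (a helper returning test error) and the two state dicts are assumed to come from such a pair of runs.

```python
import torch


def interpolate_state(state_a, state_b, alpha):
    """(1 - alpha) * w_a + alpha * w_b for each floating-point tensor.

    Integer buffers (e.g. BatchNorm step counters) are copied from one
    endpoint unchanged, since interpolating them is meaningless.
    """
    return {
        k: torch.lerp(state_a[k], state_b[k], alpha)
        if state_a[k].is_floating_point() else state_a[k]
        for k in state_a
    }


def instability(model, state_a, state_b, evaluate, num_points=30):
    """Error rise along the linear path between two trained networks.

    `evaluate(model)` is a hypothetical helper returning test error.
    The instability measure is the maximum error along the path minus
    the mean of the two endpoint errors; the networks are linearly
    mode connected (stable to SGD noise) when this is roughly zero.
    """
    errors = []
    for i in range(num_points + 1):
        alpha = i / num_points
        model.load_state_dict(interpolate_state(state_a, state_b, alpha))
        errors.append(evaluate(model))
    return max(errors) - (errors[0] + errors[-1]) / 2
```

The IMP procedure the paper studies can be sketched in the same hedged spirit. `train_steps` is a hypothetical training loop that holds pruned weights at zero; the rest follows the description above: train to step k, train to completion, prune the smallest-magnitude surviving weights, rewind the survivors to their step-k values, and repeat (k = 0 corresponds to rewinding to initialization, as in the original lottery ticket work on MNIST).

```python
import copy

import torch


def magnitude_mask(model, mask, frac=0.2):
    """Prune the `frac` of still-active weights with the smallest magnitudes."""
    scores = torch.cat([(p.detach().abs() * mask[n]).flatten()
                        for n, p in model.named_parameters() if n in mask])
    active = scores[scores > 0]
    cutoff = active.kthvalue(max(1, int(frac * active.numel()))).values
    return {n: (p.detach().abs() > cutoff).float() * mask[n]
            for n, p in model.named_parameters() if n in mask}


def imp_with_rewinding(model, train_steps, total_steps, k, rounds=15):
    """Hypothetical `train_steps(model, n, mask)` trains for n steps while
    keeping weights where mask == 0 frozen at zero."""
    mask = {n: torch.ones_like(p)
            for n, p in model.named_parameters() if p.dim() > 1}
    train_steps(model, k, mask)                    # reach the rewind point
    rewind = copy.deepcopy(model.state_dict())     # save weights W_k
    for _ in range(rounds):
        train_steps(model, total_steps - k, mask)  # train to completion
        mask = magnitude_mask(model, mask)         # prune smallest 20%
        model.load_state_dict(rewind)              # rewind survivors to W_k
    return mask, rewind
```

Read together, the two sketches restate the paper's finding: the subnetwork produced by `imp_with_rewinding` trains to full accuracy only for rewind steps k at which the `instability` of the pruned network is roughly zero.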

Cite

Text

Frankle et al. "Linear Mode Connectivity and the Lottery Ticket Hypothesis." International Conference on Machine Learning, 2020.

Markdown

[Frankle et al. "Linear Mode Connectivity and the Lottery Ticket Hypothesis." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/frankle2020icml-linear/)

BibTeX

@inproceedings{frankle2020icml-linear,
  title     = {{Linear Mode Connectivity and the Lottery Ticket Hypothesis}},
  author    = {Frankle, Jonathan and Dziugaite, Gintare Karolina and Roy, Daniel and Carbin, Michael},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {3259--3269},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/frankle2020icml-linear/}
}