Minibatch vs Local SGD for Heterogeneous Distributed Learning

Abstract

We analyze Local SGD (also known as parallel or federated SGD) and Minibatch SGD in the heterogeneous distributed setting, where each machine has access to stochastic gradient estimates for a different, machine-specific convex objective; the goal is to optimize with respect to the average objective; and machines can only communicate intermittently. We argue that (i) Minibatch SGD (even without acceleration) dominates all existing analyses of Local SGD in this setting, (ii) accelerated Minibatch SGD is optimal when the heterogeneity is high, and (iii) we present the first upper bound for Local SGD that improves over Minibatch SGD in a non-homogeneous regime.
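To make the two algorithms concrete, below is a minimal sketch (not taken from the paper, and not the authors' experimental setup) of Local SGD and Minibatch SGD on a toy heterogeneous problem: machine m holds its own quadratic objective f_m(x) = 0.5 (x - b_m)^2, stochastic gradients add Gaussian noise, and the goal is to minimize the average objective whose minimizer is the mean of the b_m. All names (stoch_grad, local_sgd, minibatch_sgd) and constants (M, K, R, lr, noise) are illustrative assumptions, chosen only to show the communication pattern: Local SGD takes K local steps per machine between averaging rounds, while Minibatch SGD takes one large-batch step per round at a shared iterate.

# A toy sketch contrasting Local SGD and Minibatch SGD under heterogeneity.
# Assumed setup (not from the paper): machine m has f_m(x) = 0.5 * (x - b_m)^2,
# stochastic gradients are exact gradients plus Gaussian noise, and machines
# communicate only once every K gradient steps, for R rounds in total.
import numpy as np

rng = np.random.default_rng(0)
M, K, R, lr, noise = 8, 10, 50, 0.05, 1.0
b = rng.normal(0.0, 5.0, size=M)   # machine-specific optima -> heterogeneity
x_star = b.mean()                  # minimizer of the average objective

def stoch_grad(x, m):
    # Stochastic gradient of f_m at x: (x - b_m) plus noise.
    return (x - b[m]) + noise * rng.normal()

def local_sgd():
    x = np.zeros(M)                # each machine keeps its own local iterate
    for _ in range(R):             # R communication rounds
        for _ in range(K):         # K local steps between communications
            for m in range(M):
                x[m] -= lr * stoch_grad(x[m], m)
        x[:] = x.mean()            # communicate: average the local iterates
    return x[0]

def minibatch_sgd():
    x = 0.0                        # a single shared iterate
    for _ in range(R):             # one large-batch step per communication round
        # Average M*K stochastic gradients, all evaluated at the shared iterate.
        g = np.mean([stoch_grad(x, m) for m in range(M) for _ in range(K)])
        x -= lr * K * g            # larger step, using the same per-round budget
    return x

print("optimum      :", x_star)
print("Local SGD    :", local_sgd())
print("Minibatch SGD:", minibatch_sgd())

Both methods use the same budget of M*K stochastic gradients per communication round; the difference the paper studies is that Minibatch SGD evaluates all of them at one shared point, whereas Local SGD lets each machine drift toward its own optimum between rounds, which is exactly where heterogeneity hurts.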

Cite

Text

Woodworth et al. "Minibatch vs Local SGD for Heterogeneous Distributed Learning." Neural Information Processing Systems, 2020.

Markdown

[Woodworth et al. "Minibatch vs Local SGD for Heterogeneous Distributed Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/woodworth2020neurips-minibatch/)

BibTeX

@inproceedings{woodworth2020neurips-minibatch,
  title     = {{Minibatch vs Local SGD for Heterogeneous Distributed Learning}},
  author    = {Woodworth, Blake E and Patel, Kumar Kshitij and Srebro, Nati},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/woodworth2020neurips-minibatch/}
}