The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication
Abstract
We resolve the min-max complexity of distributed stochastic convex optimization (up to a log factor) in the intermittent communication setting, where $M$ machines work in parallel over the course of $R$ rounds of communication to optimize the objective, and during each round of communication, each machine may sequentially compute $K$ stochastic gradient estimates. We present a novel lower bound with a matching upper bound that establishes an optimal algorithm.
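To make the setting concrete, here is a minimal sketch of the intermittent communication pattern described in the abstract: $M$ machines each compute $K$ stochastic gradients between each of $R$ communication rounds, for a total budget of $MKR$ oracle calls with only $R$ synchronizations. The sketch uses Minibatch SGD on a toy quadratic purely to illustrate the $M$/$K$/$R$ structure; it is not the optimal algorithm established in the paper, and the objective, step size, and noise model are illustrative assumptions.

```python
import numpy as np

def stochastic_grad(x, rng, noise_std=0.1):
    """Stochastic gradient of f(x) = 0.5 * ||x||^2 with additive Gaussian
    noise (a stand-in for one call to a stochastic gradient oracle)."""
    return x + noise_std * rng.standard_normal(x.shape)

def minibatch_sgd_intermittent(M=4, R=10, K=5, dim=3, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.ones(dim)  # shared iterate, synchronized across all M machines
    for _ in range(R):  # R rounds of communication
        # Between communications, each of the M machines sequentially
        # computes K stochastic gradients (here, all at the current x).
        per_machine = [
            np.mean([stochastic_grad(x, rng) for _ in range(K)], axis=0)
            for _ in range(M)
        ]
        # Communication step: average the M local minibatch gradients
        # and take one synchronized update per round.
        g = np.mean(per_machine, axis=0)
        x = x - lr * g
    return x

print(minibatch_sgd_intermittent())
```

Within the same communication pattern, each machine could instead take $K$ sequential local steps between rounds (as in Local SGD); the paper's question is which use of this $MKR$ budget is min-max optimal.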
Cite
Text
Woodworth et al. "The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication." Conference on Learning Theory, 2021.

Markdown

[Woodworth et al. "The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication." Conference on Learning Theory, 2021.](https://mlanthology.org/colt/2021/woodworth2021colt-minmax/)

BibTeX
@inproceedings{woodworth2021colt-minmax,
title = {{The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication}},
author = {Woodworth, Blake E. and Bullins, Brian and Shamir, Ohad and Srebro, Nathan},
booktitle = {Conference on Learning Theory},
year = {2021},
pages = {4386--4437},
volume = {134},
url = {https://mlanthology.org/colt/2021/woodworth2021colt-minmax/}
}