AdaDelay: Delay Adaptive Distributed Stochastic Optimization

Abstract

We develop distributed stochastic convex optimization algorithms under a delayed gradient model in which server nodes update parameters and worker nodes compute stochastic (sub)gradients. Our setup is motivated by the behavior of real-world distributed computation systems; in particular, we analyze a setting wherein worker nodes can be differently slow at different times. In contrast to existing approaches, we do not impose a worst-case bound on the delays but instead allow the updates to be sensitive to the actual delays experienced. This sensitivity allows the use of larger step sizes, which can speed up initial convergence without waiting too long for slower machines, while still preserving the global convergence rate. We experiment with different delay patterns and obtain noticeable improvements on large-scale real datasets with billions of examples and features.
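
The abstract describes updates whose step sizes adapt to the delay each gradient actually experienced rather than to a worst-case bound. Below is a minimal sketch of that idea, assuming a step size of the form c / sqrt(t + tau_t), where tau_t is the observed delay of the gradient applied at server step t; the synthetic least-squares problem, the simulated delay model, and the constant c are illustrative assumptions, not details taken from the paper.

import numpy as np

# Sketch of a delay-adaptive update in the spirit of AdaDelay (assumptions noted above).
# Larger observed delays shrink the step; fresh gradients get larger steps.

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize 0.5 * ||A x - b||^2 (illustrative only).
n, d = 1000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

def stochastic_grad(x, batch=32):
    # Mini-batch gradient of the least-squares objective.
    idx = rng.integers(0, n, size=batch)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch

c = 0.5          # base step-size constant (tuning assumption)
T = 5000         # number of server steps
max_delay = 50   # delays are only simulated here; the algorithm itself needs no bound

x = np.zeros(d)
pending = []     # (arrival_step, sent_step, gradient) triples emulating slow workers

for t in range(1, T + 1):
    # A "worker" computes a gradient at the current parameters; it arrives
    # after a random delay, emulating machines that are differently slow.
    delay = int(rng.integers(0, max_delay))
    pending.append((t + delay, t, stochastic_grad(x)))

    # Apply every gradient that has arrived by step t, with a step size
    # that adapts to the delay it actually suffered.
    arrived = [p for p in pending if p[0] <= t]
    pending = [p for p in pending if p[0] > t]
    for _, t_sent, g in arrived:
        tau = t - t_sent                      # observed delay
        alpha = c / np.sqrt(t + tau)          # delay-adaptive step size
        x -= alpha * g

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))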

Cite

Text

Sra et al. "AdaDelay: Delay Adaptive Distributed Stochastic Optimization." International Conference on Artificial Intelligence and Statistics, 2016.

Markdown

[Sra et al. "AdaDelay: Delay Adaptive Distributed Stochastic Optimization." International Conference on Artificial Intelligence and Statistics, 2016.](https://mlanthology.org/aistats/2016/sra2016aistats-adadelay/)

BibTeX

@inproceedings{sra2016aistats-adadelay,
  title     = {{AdaDelay: Delay Adaptive Distributed Stochastic Optimization}},
  author    = {Sra, Suvrit and Yu, Adams Wei and Li, Mu and Smola, Alexander J.},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2016},
  pages     = {957--965},
  url       = {https://mlanthology.org/aistats/2016/sra2016aistats-adadelay/}
}