LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning

Abstract

This paper presents a new class of gradient methods for distributed machine learning that adaptively skip gradient calculations to learn with reduced communication and computation. Simple rules are designed to detect slowly varying gradients and therefore trigger the reuse of outdated gradients. The resulting gradient-based algorithms are termed Lazily Aggregated Gradient, justifying our acronym LAG used henceforth. Theoretically, the merits of this contribution are: i) the convergence rate is the same as that of batch gradient descent in the strongly convex, convex, and nonconvex cases; and ii) if the distributed datasets are heterogeneous (quantified by certain measurable constants), the communication rounds needed to achieve a targeted accuracy are reduced thanks to the adaptive reuse of lagged gradients. Numerical experiments on both synthetic and real data corroborate a significant communication reduction compared with alternatives.
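The abstract describes a loop in which each worker uploads a fresh gradient only when it has changed enough, and the server otherwise reuses the stored, lagged gradient. The sketch below illustrates this idea on a toy least-squares problem; it is not the paper's exact trigger rule (which scales the threshold by recent iterate differences), and the worker setup, step size, and fixed `threshold` are assumptions made for the demonstration.

```python
import numpy as np

# Minimal sketch of a lazily aggregated gradient loop (illustrative only).
# A worker re-sends its gradient only when it differs enough from the last
# gradient it uploaded; otherwise the server reuses the lagged copy.

def make_worker(A, b):
    """Gradient of the local least-squares loss 0.5 * ||A x - b||^2."""
    return lambda x: A.T @ (A @ x - b)

rng = np.random.default_rng(0)
dim, n_workers = 5, 4
workers = [make_worker(rng.normal(size=(20, dim)), rng.normal(size=20))
           for _ in range(n_workers)]

x = np.zeros(dim)
step = 1e-3
threshold = 1e-2                      # hypothetical fixed trigger threshold
last_sent = [w(x) for w in workers]   # gradients currently stored at the server
uploads = n_workers                   # count the initial uploads

for k in range(200):
    for m, grad_fn in enumerate(workers):
        g_new = grad_fn(x)
        # Lazy rule (simplified): upload only if the local gradient changed
        # enough; otherwise the server keeps the outdated (lagged) gradient.
        if np.linalg.norm(g_new - last_sent[m]) > threshold:
            last_sent[m] = g_new
            uploads += 1
    x -= step * np.sum(last_sent, axis=0)   # aggregate the lagged gradients

print("uploads used:", uploads, "vs.", 200 * n_workers, "for batch gradient descent")
```

When the local gradients vary slowly, most rounds skip the upload, which is the source of the communication savings the abstract reports.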

Cite

Text

Chen et al. "LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning." Neural Information Processing Systems, 2018.

Markdown

[Chen et al. "LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/chen2018neurips-lag/)

BibTeX

@inproceedings{chen2018neurips-lag,
  title     = {{LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning}},
  author    = {Chen, Tianyi and Giannakis, Georgios and Sun, Tao and Yin, Wotao},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {5050--5060},
  url       = {https://mlanthology.org/neurips/2018/chen2018neurips-lag/}
}