Decentralized Deep Learning with Arbitrary Communication Compression

Abstract

Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters. As current approaches are limited by network bandwidth, we propose the use of communication compression in the decentralized training context. We show that Choco-SGD achieves linear speedup in the number of workers for arbitrarily high compression ratios on general non-convex functions and non-IID training data. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models (i) over decentralized user devices connected by a peer-to-peer network and (ii) in a datacenter.
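For intuition, the following is a minimal NumPy sketch of one Choco-SGD round on a fixed topology: each worker takes a local stochastic gradient step, broadcasts a compressed difference between its model and its publicly shared estimate, and then performs a gossip (consensus) step on those shared estimates. Names such as `top_k_compress`, `choco_sgd_round`, the mixing matrix `W`, and the consensus step size `gamma` are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def top_k_compress(v, k):
    """Keep only the k largest-magnitude entries (one common compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def choco_sgd_round(x, x_hat, grads, W, lr, gamma, k):
    """One synchronous Choco-SGD round for all n workers (sketch).

    x      : (n, d) local models
    x_hat  : (n, d) publicly shared (compressed) estimates of the models
    grads  : (n, d) stochastic gradients evaluated at x
    W      : (n, n) symmetric, doubly stochastic mixing matrix
    lr     : learning rate, gamma : consensus step size, k : sparsity level
    """
    n, _ = x.shape
    # 1) local stochastic gradient step on each worker
    x = x - lr * grads
    # 2) each worker compresses the difference to its shared estimate
    q = np.stack([top_k_compress(x[i] - x_hat[i], k) for i in range(n)])
    # 3) neighbors apply the received compressed updates to the shared copies
    x_hat = x_hat + q
    # 4) gossip step: move each model toward the weighted neighbor average
    #    of the shared estimates (W is doubly stochastic, so W @ x_hat - x_hat
    #    equals the sum of w_ij * (x_hat_j - x_hat_i) over neighbors j)
    x = x + gamma * (W @ x_hat - x_hat)
    return x, x_hat
```

Only the sparse vectors `q` would be sent over the network, which is what makes the per-round communication cost independent of the compression ratio's effect on convergence claimed in the abstract.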

Cite

Text

Koloskova et al. "Decentralized Deep Learning with Arbitrary Communication Compression." International Conference on Learning Representations, 2020.

Markdown

[Koloskova et al. "Decentralized Deep Learning with Arbitrary Communication Compression." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/koloskova2020iclr-decentralized/)

BibTeX

@inproceedings{koloskova2020iclr-decentralized,
  title     = {{Decentralized Deep Learning with Arbitrary Communication Compression}},
  author    = {Koloskova, Anastasia and Lin, Tao and Stich, Sebastian U. and Jaggi, Martin},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/koloskova2020iclr-decentralized/}
}