Communication-Efficient Distributed SGD with Sketching
Abstract
Large-scale distributed training of neural networks is often limited by network bandwidth, wherein the communication time overwhelms the local computation time. Motivated by the success of sketching methods in sub-linear/streaming algorithms, we introduce Sketched-SGD, an algorithm for carrying out distributed SGD by communicating sketches instead of full gradients. We show that Sketched-SGD has favorable convergence rates on several classes of functions. When considering all communication -- both of gradients and of updated model weights -- Sketched-SGD reduces the amount of communication required compared to other gradient compression methods from $\mathcal{O}(d)$ or $\mathcal{O}(W)$ to $\mathcal{O}(\log d)$, where $d$ is the number of model parameters and $W$ is the number of workers participating in training. We run experiments on a transformer model, an LSTM, and a residual network, demonstrating up to a 40x reduction in total communication cost with no loss in final model performance. We also show experimentally that Sketched-SGD scales to at least 256 workers without increasing communication cost or degrading model performance.
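The abstract describes compressing gradients with sketches so that each worker communicates roughly $\mathcal{O}(\log d)$ values instead of $d$. As a rough illustration of the kind of data structure involved, the minimal Count Sketch below (written for this page, not taken from the authors' code; the class name, table dimensions, and toy gradient are all assumptions) shows how a high-dimensional vector can be folded into a small table and how its largest coordinates can be approximately recovered. Because sketches are linear, a server could sum workers' tables entry-wise before recovery.

```python
import numpy as np

# Illustrative Count Sketch (a sketch of the idea, not the paper's implementation):
# compresses a d-dimensional gradient into an r x c table, from which the
# largest coordinates can be approximately recovered.
class CountSketch:
    def __init__(self, d, rows=5, cols=256, seed=0):
        rng = np.random.default_rng(seed)
        self.d, self.rows, self.cols = d, rows, cols
        # Per-row hash bucket and random sign for every coordinate.
        self.buckets = rng.integers(0, cols, size=(rows, d))
        self.signs = rng.choice([-1.0, 1.0], size=(rows, d))
        self.table = np.zeros((rows, cols))

    def accumulate(self, grad):
        # Add a dense gradient into the sketch. Sketches are linear, so a
        # server can sum the tables sent by different workers entry-wise.
        for r in range(self.rows):
            np.add.at(self.table[r], self.buckets[r], self.signs[r] * grad)

    def estimate(self):
        # Median-over-rows estimate of every coordinate's value.
        est = np.stack([self.signs[r] * self.table[r, self.buckets[r]]
                        for r in range(self.rows)])
        return np.median(est, axis=0)

# Toy usage: a gradient with a few heavy coordinates plus small noise.
d = 10_000
grad = np.zeros(d)
grad[[7, 42, 999]] = [5.0, -3.0, 4.0]
grad += 0.01 * np.random.default_rng(1).standard_normal(d)

sk = CountSketch(d)
sk.accumulate(grad)
top_k = np.argsort(-np.abs(sk.estimate()))[:3]
print(sorted(top_k))  # the heavy coordinates 7, 42, 999 are recovered w.h.p.
```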
Cite
Text
Ivkin et al. "Communication-Efficient Distributed SGD with Sketching." Neural Information Processing Systems, 2019.

Markdown
[Ivkin et al. "Communication-Efficient Distributed SGD with Sketching." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/ivkin2019neurips-communicationefficient/)

BibTeX
@inproceedings{ivkin2019neurips-communicationefficient,
title = {{Communication-Efficient Distributed SGD with Sketching}},
author = {Ivkin, Nikita and Rothchild, Daniel and Ullah, Enayat and Braverman, Vladimir and Stoica, Ion and Arora, Raman},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {13144--13154},
url = {https://mlanthology.org/neurips/2019/ivkin2019neurips-communicationefficient/}
}