Sketching for Distributed Deep Learning: A Sharper Analysis
Abstract
The high communication cost between the server and the clients is a significant bottleneck in scaling distributed learning for overparameterized deep models. One popular approach for reducing this communication overhead is randomized sketching. However, existing theoretical analyses for sketching-based distributed learning (sketch-DL) either incur a prohibitive dependence on the ambient dimension or require additional restrictive assumptions such as heavy-hitters. Despite these pessimistic analyses, empirical evidence suggests that sketch-DL is competitive with its uncompressed counterpart, motivating a sharper analysis. In this work, we introduce a sharper, ambient dimension-independent convergence analysis for sketch-DL using the second-order geometry specified by the loss Hessian. Our results imply ambient dimension-independent communication complexity for sketch-DL. We present empirical results on both the loss Hessian and the overall accuracy of sketch-DL supporting our theoretical results. Taken together, our results provide theoretical justification for the observed empirical success of sketch-DL.
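For readers unfamiliar with the setup, the following minimal Python sketch illustrates the general sketch-and-unsketch communication pattern behind sketch-DL, using a single-hash count sketch as the compression operator. This is an illustrative assumption, not the paper's algorithm or analysis; all function names and parameters are hypothetical.

```python
import numpy as np

def make_count_sketch(dim, sketch_dim, seed=0):
    """Sample random buckets and signs defining a count-sketch map S (sketch_dim x dim)."""
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, sketch_dim, size=dim)   # row each coordinate hashes to
    signs = rng.choice([-1.0, 1.0], size=dim)         # random +/-1 sign per coordinate
    return buckets, signs

def sketch(grad, buckets, signs, sketch_dim):
    """Compress a d-dimensional gradient into its sketch_dim-dimensional sketch S @ grad."""
    s = np.zeros(sketch_dim)
    np.add.at(s, buckets, signs * grad)               # accumulate signed coordinates per bucket
    return s

def unsketch(s, buckets, signs):
    """Approximately recover the gradient via the transpose map S^T @ s (unbiased estimate)."""
    return signs * s[buckets]

# Toy communication round: clients send sketches, the server averages and unsketches.
dim, sketch_dim, n_clients = 10_000, 500, 4
buckets, signs = make_count_sketch(dim, sketch_dim)   # shared randomness across clients
client_grads = [np.random.randn(dim) for _ in range(n_clients)]
avg_sketch = np.mean([sketch(g, buckets, signs, sketch_dim) for g in client_grads], axis=0)
approx_avg_grad = unsketch(avg_sketch, buckets, signs)  # server-side approximate update
```

Each client communicates only sketch_dim numbers per round instead of dim, which is the communication saving that motivates the analysis above.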
Cite
Text
Shrivastava et al. "Sketching for Distributed Deep Learning: A Sharper Analysis." Neural Information Processing Systems, 2024. doi:10.52202/079017-0207
Markdown
[Shrivastava et al. "Sketching for Distributed Deep Learning: A Sharper Analysis." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/shrivastava2024neurips-sketching/) doi:10.52202/079017-0207
BibTeX
@inproceedings{shrivastava2024neurips-sketching,
title = {{Sketching for Distributed Deep Learning: A Sharper Analysis}},
author = {Shrivastava, Mayank and Isik, Berivan and Li, Qiaobo and Koyejo, Sanmi and Banerjee, Arindam},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-0207},
url = {https://mlanthology.org/neurips/2024/shrivastava2024neurips-sketching/}
}