Compressed Decentralized Momentum Stochastic Gradient Methods for Nonconvex Optimization
Abstract
In this paper, we design two compressed decentralized algorithms for solving nonconvex stochastic optimization under two different scenarios. Both algorithms adopt a momentum technique to achieve fast convergence and a message-compression technique to save communication costs. Although momentum acceleration and compressed communication have each been used in the literature, it is highly nontrivial to prove theoretically that their combination in a decentralized algorithm retains the benefits of both, because the consensus error, the compression error, and the bias of the momentum gradient must be controlled simultaneously. For the scenario where gradients are bounded, our proposal is a compressed decentralized adaptive method. To the best of our knowledge, this is the first decentralized adaptive stochastic gradient method with compressed communication. For the scenario of data heterogeneity without bounded gradients, our proposal is a compressed decentralized heavy-ball method, which applies a gradient tracking technique to address the challenge of data heterogeneity. Notably, both methods achieve an optimal convergence rate, and within a certain regime of the user-specified error tolerance they attain linear speedup and admit topology-independent algorithmic parameters. Superior empirical performance is observed over state-of-the-art methods when training deep neural networks (DNNs) and Transformers.
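The abstract names several standard building blocks: local momentum (heavy-ball) steps, message compression, and gossip-style consensus over a network. The sketch below is a minimal toy illustration of how such pieces typically fit together on a synthetic least-squares problem; it is not the paper's algorithm, and the compression operator (`topk_compress`), the ring mixing matrix, and the update order are illustrative assumptions only.

```python
# Generic sketch of compressed decentralized momentum SGD on a toy problem.
# NOT the algorithm proposed in the paper; all names and the update order
# are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def stochastic_grad(x, A, b, batch=8):
    """Mini-batch gradient of the local least-squares loss 0.5*||Ax - b||^2."""
    i = rng.choice(A.shape[0], size=batch, replace=False)
    return A[i].T @ (A[i] @ x - b[i]) / batch

# Toy decentralized setup: n nodes on a ring, each holding local data (A_i, b_i).
n, d = 4, 20
W = np.eye(n) * 0.5 + np.eye(n, k=1) * 0.25 + np.eye(n, k=-1) * 0.25
W[0, -1] = W[-1, 0] = 0.25                       # doubly stochastic ring mixing matrix
data = [(rng.standard_normal((50, d)), rng.standard_normal(50)) for _ in range(n)]

x = np.zeros((n, d))        # local models
m = np.zeros((n, d))        # heavy-ball momentum buffers
x_hat = np.zeros((n, d))    # compressed copies exchanged with neighbors
lr, beta, k = 0.01, 0.9, d // 4

for t in range(200):
    # 1) local momentum (heavy-ball) step with a stochastic gradient
    for i in range(n):
        g = stochastic_grad(x[i], *data[i])
        m[i] = beta * m[i] + g
        x[i] = x[i] - lr * m[i]
    # 2) compressed communication: send only a top-k sparsified correction
    for i in range(n):
        x_hat[i] = x_hat[i] + topk_compress(x[i] - x_hat[i], k)
    # 3) gossip/consensus step using neighbors' compressed copies
    x = x + 0.5 * (W @ x_hat - x_hat)

print("consensus error:", np.linalg.norm(x - x.mean(0)))
```

In this toy version, step 2 keeps the communicated message sparse while step 3 drives the local models toward agreement; the paper's analysis concerns exactly how such compression, consensus, and momentum errors interact, which the sketch does not attempt to reproduce.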
Cite
Text
Liu et al. "Compressed Decentralized Momentum Stochastic Gradient Methods for Nonconvex Optimization." Transactions on Machine Learning Research, 2025.

Markdown
[Liu et al. "Compressed Decentralized Momentum Stochastic Gradient Methods for Nonconvex Optimization." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/liu2025tmlr-compressed/)

BibTeX
@article{liu2025tmlr-compressed,
title = {{Compressed Decentralized Momentum Stochastic Gradient Methods for Nonconvex Optimization}},
author = {Liu, Wei and Panda, Anweshit and Pandey, Ujwal and Brissette, Christopher and Shen, Yikang and Slota, George and Wang, Naigang and Chen, Jie and Xu, Yangyang},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/liu2025tmlr-compressed/}
}