Achieving Linear Speedup and Near-Optimal Complexity for Decentralized Optimization over Row-Stochastic Networks
Abstract
A key challenge in decentralized optimization is determining the optimal convergence rate and designing algorithms to achieve it. While this problem has been extensively addressed for doubly-stochastic and column-stochastic mixing matrices, the row-stochastic scenario remains unexplored. This paper bridges this gap by introducing effective metrics to capture the influence of row-stochastic mixing matrices and establishing the first convergence lower bound for decentralized learning over row-stochastic networks. However, existing algorithms fail to attain this lower bound due to two key issues: deviation in the descent direction caused by the adapted gradient tracking (GT) and instability introduced by the Pull-Diag protocol. To address descent deviation, we propose a novel analysis framework demonstrating that Pull-Diag-GT achieves linear speedup—the first such result for row-stochastic decentralized optimization. Moreover, by incorporating a multi-step gossip (MG) protocol, we resolve the instability issue and attain the lower bound, achieving near-optimal complexity for decentralized optimization over row-stochastic networks.
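The Pull-Diag idea referenced in the abstract rests on a simple fact: for a primitive row-stochastic matrix W, the powers W^k converge to 1·π^T, where π is the left Perron vector of W. Each node i can therefore estimate its own Perron weight π_i from the diagonal entry (W^k)_{ii} and use it to de-bias its π-weighted contribution toward a uniform average. Below is a minimal, self-contained sketch of this mechanism; the 3-node matrix and iteration counts are illustrative choices, not taken from the paper.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A row-stochastic mixing matrix (each row sums to 1) on a
# strongly connected 3-node directed network with self-loops.
W = [[0.5, 0.5, 0.0],
     [0.2, 0.3, 0.5],
     [0.4, 0.0, 0.6]]
n = len(W)

# Pull-Diag estimate: iterate V <- V W so that V = W^k, and let each
# node i read off its own diagonal entry (W^k)_{ii} -> pi_i.
V = [row[:] for row in W]
for _ in range(60):
    V = matmul(V, W)
diag = [V[i][i] for i in range(n)]

# Reference value: the left Perron vector via power iteration pi <- pi W
# (row-stochastic W preserves the sum of pi, so no renormalization needed).
pi = [1.0 / n] * n
for _ in range(60):
    pi = [sum(pi[i] * W[i][j] for i in range(n)) for j in range(n)]

# De-biasing: pull-based averaging of local values b_i converges to the
# pi-weighted combination sum_i pi_i * b_i, not the uniform mean.  Scaling
# node i's contribution by 1 / (n * pi_i) recovers the uniform average.
b = [1.0, 2.0, 3.0]
biased = sum(pi[i] * b[i] for i in range(n))
debiased = sum(pi[i] * (b[i] / (n * pi[i])) for i in range(n))
print(diag, pi, biased, debiased)
```

In the paper's setting the same correction is applied to the gradient-tracking direction rather than to a static average, which is where the descent-deviation analysis comes in; this sketch only shows why the diagonal of W^k supplies the needed weights.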
Cite
Text
Liang et al. "Achieving Linear Speedup and Near-Optimal Complexity for Decentralized Optimization over Row-Stochastic Networks." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown
[Liang et al. "Achieving Linear Speedup and Near-Optimal Complexity for Decentralized Optimization over Row-Stochastic Networks." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/liang2025icml-achieving/)

BibTeX
@inproceedings{liang2025icml-achieving,
title = {{Achieving Linear Speedup and Near-Optimal Complexity for Decentralized Optimization over Row-Stochastic Networks}},
author = {Liang, Liyuan and Chen, Xinyi and Luo, Gan and Yuan, Kun},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {37100--37130},
volume = {267},
url = {https://mlanthology.org/icml/2025/liang2025icml-achieving/}
}