Enhancing Parallelism in Decentralized Stochastic Convex Optimization
Abstract
Decentralized learning has emerged as a powerful approach for handling large datasets across multiple machines in a communication-efficient manner. However, such methods often face scalability limitations, as increasing the number of machines beyond a certain point negatively impacts convergence rates. In this work, we propose Decentralized Anytime SGD, a novel decentralized learning algorithm that significantly extends the critical parallelism threshold, enabling the effective use of more machines without compromising performance. Within the stochastic convex optimization (SCO) framework, we establish a theoretical upper bound on parallelism that surpasses the current state-of-the-art, allowing larger networks to achieve favorable statistical guarantees and closing the gap with centralized learning in highly connected topologies.
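The abstract names the algorithm but does not spell out its update rule, so the following is only a minimal sketch of what a decentralized, Anytime-SGD-style update might look like, assuming a doubly stochastic gossip matrix W over the network and stochastic gradients queried at a weighted running average of the iterates; the function names (grad_fn), step size eta, and averaging weights are illustrative assumptions, not details taken from the paper.

import numpy as np

def decentralized_anytime_sgd_sketch(grad_fn, W, x0, eta, T, rng):
    # Illustrative sketch only -- not the paper's pseudocode.
    # grad_fn(i, q, rng): stochastic gradient of machine i's local loss at query point q.
    # W: M x M doubly stochastic gossip matrix matching the network topology (assumption).
    M = W.shape[0]
    x = np.tile(x0, (M, 1)).astype(float)   # local iterates, one row per machine
    q = x.copy()                             # query points = weighted running average of iterates
    weight_sum = 0.0
    for t in range(1, T + 1):
        g = np.stack([grad_fn(i, q[i], rng) for i in range(M)])  # gradients at the averaged query points
        x = W @ x - eta * g                  # gossip-average iterates, then take a local SGD step
        weight_sum += t
        q = q + (t / weight_sum) * (x - q)   # incremental weighted average with weights alpha_t = t (assumed)
    return q.mean(axis=0)                    # consensus estimate: average of the machines' query points

In this sketch, the gossip step W @ x is what couples the machines, while querying gradients at the running average q rather than at x itself is the "Anytime" ingredient; the paper's actual algorithm, step sizes, and averaging weights may differ.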
Cite
Text
Eisen et al. "Enhancing Parallelism in Decentralized Stochastic Convex Optimization." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Eisen et al. "Enhancing Parallelism in Decentralized Stochastic Convex Optimization." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/eisen2025icml-enhancing/)
BibTeX
@inproceedings{eisen2025icml-enhancing,
title = {{Enhancing Parallelism in Decentralized Stochastic Convex Optimization}},
author = {Eisen, Ofri and Dorfman, Ron and Levy, Kfir Yehuda},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {15105--15129},
volume = {267},
url = {https://mlanthology.org/icml/2025/eisen2025icml-enhancing/}
}