DoCoM: Compressed Decentralized Optimization with Near-Optimal Sample Complexity

Abstract

This paper proposes the Doubly Compressed Momentum-assisted stochastic gradient tracking algorithm (DoCoM) for communication-efficient decentralized optimization. The algorithm features two main ingredients to achieve a near-optimal sample complexity while allowing for communication compression. First, the algorithm tracks both the averaged iterate and stochastic gradient using compressed gossiping consensus. Second, a momentum step is incorporated for adaptive variance reduction with the local gradient estimates. We show that DoCoM finds a near-stationary solution at all participating agents satisfying $\mathbb{E}[ \| \nabla f( \theta ) \|^2 ] = {\cal O}( 1 / T^{2/3} )$ in $T$ iterations, where $f(\theta)$ is a smooth (possibly non-convex) objective function. Notably, the proof is achieved by analytically designing a new potential function that tightly tracks the one-iteration progress of DoCoM. As a corollary, our analysis also establishes the linear convergence of DoCoM to a globally optimal solution for objective functions satisfying the Polyak-Łojasiewicz condition. Numerical experiments demonstrate that our algorithm outperforms several state-of-the-art algorithms in practice.
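To make the two ingredients described in the abstract concrete, here is a minimal sketch (not the authors' exact DoCoM pseudocode) of (i) a momentum-assisted, variance-reduced gradient estimate and (ii) a compressed-difference gossip step. The function names (`top_k_compress`, `momentum_grad_estimate`, `compressed_gossip`) and the parameters `beta`, `gamma`, and `k` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def top_k_compress(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest
    (a common contractive compressor in compressed decentralized methods)."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def momentum_grad_estimate(grad_new, grad_old, v_prev, beta):
    """Momentum-assisted gradient estimate (illustrative form):
    v_t = grad_new + (1 - beta) * (v_prev - grad_old),
    where grad_new and grad_old are stochastic gradients evaluated at the
    current and previous iterates on the same sample."""
    return grad_new + (1.0 - beta) * (v_prev - grad_old)

def compressed_gossip(x, x_hat, W, gamma, k):
    """One round of compressed gossip over n agents.

    x      : (n, d) private local copies
    x_hat  : (n, d) public (compressed) copies known to neighbors
    W      : (n, n) doubly stochastic mixing matrix
    Each agent transmits only the compressed difference x_i - x_hat_i,
    then takes a consensus correction of step size gamma."""
    n = x.shape[0]
    q = np.stack([top_k_compress(x[i] - x_hat[i], k) for i in range(n)])
    x_hat_new = x_hat + q                              # update public copies
    x_new = x + gamma * (W - np.eye(n)) @ x_hat_new    # consensus correction
    return x_new, x_hat_new
```

In this sketch, the same compressed gossip routine could be applied to both the iterates and the gradient-tracking variables, mirroring the "doubly compressed" tracking of the averaged iterate and stochastic gradient mentioned in the abstract.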

Cite

Text

Yau and Wai. "DoCoM: Compressed Decentralized Optimization with Near-Optimal Sample Complexity." Transactions on Machine Learning Research, 2023.

Markdown

[Yau and Wai. "DoCoM: Compressed Decentralized Optimization with Near-Optimal Sample Complexity." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/yau2023tmlr-docom/)

BibTeX

@article{yau2023tmlr-docom,
  title     = {{DoCoM: Compressed Decentralized Optimization with Near-Optimal Sample Complexity}},
  author    = {Yau, Chung-Yiu and Wai, Hoi To},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/yau2023tmlr-docom/}
}