On the Convergence of Local Stochastic Compositional Gradient Descent with Momentum

Abstract

Federated Learning has been actively studied in the past few years due to its efficiency in numerous real-world applications. However, the federated stochastic compositional optimization problem remains underexplored, even though it has widespread applications in machine learning. In this paper, we develop a novel local stochastic compositional gradient descent with momentum method, which facilitates Federated Learning for the stochastic compositional problem. Importantly, we investigate the convergence rate of our proposed method and prove that it achieves the $O(1/\epsilon^4)$ sample complexity, which is better than that of existing methods. Meanwhile, our $O(1/\epsilon^3)$ communication complexity matches that of existing methods. To the best of our knowledge, this is the first work to achieve such favorable sample and communication complexities. Additionally, extensive experimental results demonstrate superior empirical performance over existing methods, confirming the efficacy of our approach.
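To make the setting concrete, below is a minimal sketch of how local compositional gradient steps with momentum and periodic server averaging could be organized. It is an illustrative assumption, not the paper's exact algorithm: the client interface (`sample_g`, `sample_Jg`, `sample_grad_f`), the moving-average inner-value tracking, the heavy-ball momentum update, and all hyperparameters are hypothetical choices for a generic objective of the form $\frac{1}{K}\sum_k f_k(g_k(x))$.

```python
import numpy as np

def local_scgd_momentum(clients, x0, eta=0.01, beta=0.9, gamma=0.9,
                        local_steps=10, rounds=100):
    """Hedged sketch of federated local stochastic compositional gradient
    descent with momentum. Each client k is assumed to expose stochastic
    oracles sample_g(x), sample_Jg(x) (Jacobian of g_k), and
    sample_grad_f(u) for the outer gradient evaluated at the inner estimate.
    """
    x_global = x0.copy()
    for _ in range(rounds):
        local_iterates = []
        for c in clients:
            x = x_global.copy()
            u = c.sample_g(x)            # running estimate of the inner value g_k(x)
            m = np.zeros_like(x)         # momentum buffer for the gradient
            for _ in range(local_steps):
                # moving-average tracking of the inner function value
                u = (1 - gamma) * u + gamma * c.sample_g(x)
                # compositional stochastic gradient: J_g(x)^T * grad_f(u)
                grad = c.sample_Jg(x).T @ c.sample_grad_f(u)
                # heavy-ball style momentum step
                m = beta * m + (1 - beta) * grad
                x = x - eta * m
            local_iterates.append(x)
        # communication round: the server averages the local iterates
        x_global = np.mean(local_iterates, axis=0)
    return x_global
```

Running many local steps between averaging rounds is what keeps the communication complexity lower than the sample complexity; the momentum buffer and the moving-average inner estimate are the two ingredients the title refers to.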

Cite

Text

Gao et al. "On the Convergence of Local Stochastic Compositional Gradient Descent with Momentum." International Conference on Machine Learning, 2022.

Markdown

[Gao et al. "On the Convergence of Local Stochastic Compositional Gradient Descent with Momentum." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/gao2022icml-convergence/)

BibTeX

@inproceedings{gao2022icml-convergence,
  title     = {{On the Convergence of Local Stochastic Compositional Gradient Descent with Momentum}},
  author    = {Gao, Hongchang and Li, Junyi and Huang, Heng},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {7017--7035},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/gao2022icml-convergence/}
}