Efficient Decentralized Stochastic Gradient Descent Method for Nonconvex Finite-Sum Optimization Problems

Abstract

Decentralized stochastic gradient descent methods have attracted increasing interest in recent years. Numerous methods have been proposed for the nonconvex finite-sum optimization problem. However, existing methods have a large sample complexity, slowing down the empirical convergence speed. To address this issue, in this paper, we propose a novel decentralized stochastic gradient descent method for the nonconvex finite-sum optimization problem, which enjoys better sample and communication complexity than existing methods. To the best of our knowledge, our work is the first to achieve such favorable sample and communication complexities. Finally, we have conducted extensive experiments, and the experimental results confirm the superior performance of our proposed method.
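
For context, the decentralized nonconvex finite-sum problem referred to in the abstract is commonly written as below; this is the standard formulation and notation (m workers, each holding n component functions), assumed here rather than quoted from the paper.

% Standard decentralized nonconvex finite-sum formulation (notation assumed, not from the paper):
% m workers cooperatively minimize the average of their local finite-sum objectives,
% communicating only with neighbors on a given network graph.
\begin{equation*}
  \min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{m} \sum_{i=1}^{m} f_i(x),
  \qquad
  f_i(x) = \frac{1}{n} \sum_{j=1}^{n} f_{i,j}(x),
\end{equation*}
% where f_i is the local objective of worker i and each f_{i,j} is smooth and possibly nonconvex.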

Cite

Text

Zhan et al. "Efficient Decentralized Stochastic Gradient Descent Method for Nonconvex Finite-Sum Optimization Problems." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I8.20884

Markdown

[Zhan et al. "Efficient Decentralized Stochastic Gradient Descent Method for Nonconvex Finite-Sum Optimization Problems." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/zhan2022aaai-efficient/) doi:10.1609/AAAI.V36I8.20884

BibTeX

@inproceedings{zhan2022aaai-efficient,
  title     = {{Efficient Decentralized Stochastic Gradient Descent Method for Nonconvex Finite-Sum Optimization Problems}},
  author    = {Zhan, Wenkang and Wu, Gang and Gao, Hongchang},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {9006--9013},
  doi       = {10.1609/AAAI.V36I8.20884},
  url       = {https://mlanthology.org/aaai/2022/zhan2022aaai-efficient/}
}