Decentralized Accelerated Proximal Gradient Descent
Abstract
Decentralized optimization has wide applications in machine learning, signal processing, and control. In this paper, we study the decentralized composite optimization problem with a non-smooth regularization term. Many proximal-gradient-based decentralized algorithms have been proposed in the past; however, these algorithms achieve neither near-optimal computational complexity nor near-optimal communication complexity. In this paper, we propose a new method that achieves the optimal computational complexity and a near-optimal communication complexity. Our empirical study shows that the proposed algorithm outperforms existing state-of-the-art algorithms.
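To make the problem setting concrete, the sketch below shows a basic (non-accelerated) decentralized proximal gradient iteration for a composite objective with a shared l1 regularizer: each agent averages its iterate with its neighbors through a mixing matrix, takes a local gradient step on its smooth loss, and applies the proximal (soft-thresholding) operator. This is only a minimal illustration of the algorithm class the abstract refers to, not the accelerated method proposed in the paper; the function names, the mixing matrix W, and the parameters step and lam are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def decentralized_prox_grad(grads, W, x0, step, lam, iters=200):
    """Minimal decentralized proximal gradient sketch (not the paper's method).

    grads : list of callables; grads[i](x) returns the gradient of agent i's
            smooth local loss f_i at the point x.
    W     : (n, n) doubly stochastic mixing (gossip) matrix of the network.
    x0    : (n, d) array; row i is agent i's initial iterate.
    step  : step size alpha.
    lam   : weight of the shared l1 regularizer.
    """
    x = x0.copy()
    for _ in range(iters):
        mixed = W @ x  # communication round: average with neighbors
        local_grad = np.stack([g(x[i]) for i, g in enumerate(grads)])
        # local computation: gradient step followed by the prox of step*lam*||.||_1
        x = soft_threshold(mixed - step * local_grad, step * lam)
    return x
```

Accelerated variants such as the one studied here additionally maintain momentum/extrapolation sequences and use multiple communication (gossip) steps per gradient evaluation to improve the communication complexity; the details are in the paper.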
Cite
Text
Ye et al. "Decentralized Accelerated Proximal Gradient Descent." Neural Information Processing Systems, 2020.
Markdown
[Ye et al. "Decentralized Accelerated Proximal Gradient Descent." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/ye2020neurips-decentralized/)
BibTeX
@inproceedings{ye2020neurips-decentralized,
  title = {{Decentralized Accelerated Proximal Gradient Descent}},
  author = {Ye, Haishan and Zhou, Ziang and Luo, Luo and Zhang, Tong},
  booktitle = {Neural Information Processing Systems},
  year = {2020},
  url = {https://mlanthology.org/neurips/2020/ye2020neurips-decentralized/}
}