IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method
Abstract
We introduce a framework for designing primal methods in the decentralized optimization setting where local functions are smooth and strongly convex. Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method, thereby providing a systematic way to derive several well-known decentralized algorithms, including EXTRA and SSDA. When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds. We provide experimental results that demonstrate the effectiveness of the proposed algorithm on highly ill-conditioned problems.
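The abstract describes approximately solving a sequence of sub-problems induced by the augmented Lagrangian method, with a first-order method as the inner solver. The toy sketch below illustrates that general inexact augmented Lagrangian pattern on a two-node consensus problem; it uses plain (non-accelerated) gradient descent for the inner solves, and all problem data, step sizes, and iteration counts are invented for illustration — this is not the paper's IDEAL algorithm or its parameter choices.

```python
import numpy as np

# Toy two-node consensus problem: local losses
# f_i(x) = 0.5 * a_i * (x - b_i)^2, with the constraint x1 = x2.
# The data below is an illustrative assumption.
a = np.array([1.0, 4.0])
b = np.array([3.0, -1.0])

def al_grad(x, lam, rho):
    """Gradient of the augmented Lagrangian
    L(x, lam) = sum_i f_i(x_i) + lam*(x1 - x2) + (rho/2)*(x1 - x2)^2."""
    g = a * (x - b)
    c = x[0] - x[1]
    g[0] += lam + rho * c
    g[1] -= lam + rho * c
    return g

rho, lam = 1.0, 0.0
x = np.zeros(2)
for _ in range(100):                  # outer dual updates
    for _ in range(50):               # inexact inner solve via gradient descent
        x = x - 0.1 * al_grad(x, lam, rho)
    lam += rho * (x[0] - x[1])        # dual ascent on the consensus residual

# Under consensus, the minimizer of f1 + f2 is the weighted mean of the b_i:
x_star = (a @ b) / a.sum()
```

The outer loop only requires the inner sub-problem to be solved approximately; the paper's contribution is, in part, quantifying how inexact these solves can be while preserving an accelerated overall rate.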
Cite
Text

Arjevani et al. "IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method." Neural Information Processing Systems, 2020.

Markdown

[Arjevani et al. "IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/arjevani2020neurips-ideal/)

BibTeX
@inproceedings{arjevani2020neurips-ideal,
  title = {{IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method}},
  author = {Arjevani, Yossi and Bruna, Joan and Can, Bugra and Gurbuzbalaban, Mert and Jegelka, Stefanie and Lin, Hongzhou},
  booktitle = {Neural Information Processing Systems},
  year = {2020},
  url = {https://mlanthology.org/neurips/2020/arjevani2020neurips-ideal/}
}