An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
Abstract
We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal, and that for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(1/ε)) DFAL iterations, which require O(ψ_max^{1.5}/d_min · 1/ε) proximal gradient computations and communications per node in total, where ψ_max denotes the largest eigenvalue of the graph Laplacian and d_min is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods, and demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems.
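For context, the optimization model in the abstract can be written out. The following is a minimal sketch in the standard decentralized-consensus form; the symbols φ_i, ξ_i, f_i and the graph G = (N, E) are illustrative notation, not taken from this page:

\[
\min_{x \in \mathbb{R}^n} \sum_{i \in \mathcal{N}} \varphi_i(x), \qquad \varphi_i(x) := \xi_i(x) + f_i(x),
\]

where φ_i is the composite convex cost private to node i (ξ_i convex and possibly nonsmooth, f_i convex and smooth), and the distributed reformulation gives each node a local copy x_i coupled by consensus constraints along the edges:

\[
\min_{\{x_i\}_{i \in \mathcal{N}}} \sum_{i \in \mathcal{N}} \varphi_i(x_i) \quad \text{s.t.} \quad x_i = x_j \;\; \forall (i,j) \in \mathcal{E}.
\]

Since the abstract counts per-node cost in proximal gradient computations and evaluates DFAL on sparse-group LASSO, here is a minimal Python sketch of one such computation: a single proximal gradient step on a least-squares loss with an ℓ1 + group-ℓ2 regularizer. The least-squares choice of smooth part and all names are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def prox_sparse_group_lasso(v, step, lam1, lam2, groups):
    """Prox of xi(x) = lam1*||x||_1 + lam2*sum_g ||x_g||_2 at v.

    For non-overlapping groups this factors into elementwise
    soft-thresholding followed by blockwise shrinkage.
    """
    # l1 part: soft-threshold each coordinate
    x = np.sign(v) * np.maximum(np.abs(v) - step * lam1, 0.0)
    # group-l2 part: shrink each block toward zero
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        if norm_g > 0.0:
            x[g] *= max(1.0 - step * lam2 / norm_g, 0.0)
    return x

def prox_gradient_step(x, A, b, step, lam1, lam2, groups):
    """One proximal gradient step on 0.5*||Ax - b||^2 + xi(x)."""
    grad = A.T @ (A @ x - b)  # gradient of the smooth least-squares part
    return prox_sparse_group_lasso(x - step * grad, step, lam1, lam2, groups)

# Example: 4 variables in two groups, step = 1/L with L = ||A||_2^2
rng = np.random.default_rng(0)
A, b = rng.standard_normal((10, 4)), rng.standard_normal(10)
groups = [np.array([0, 1]), np.array([2, 3])]
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(4)
for _ in range(200):
    x = prox_gradient_step(x, A, b, step, lam1=0.1, lam2=0.1, groups=groups)
```

In DFAL, steps of this kind would be applied to the augmented Lagrangian subproblems at each node; the sketch above only illustrates the basic prox-gradient building block the abstract refers to.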
Cite

Text

Aybat et al. "An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization." International Conference on Machine Learning, 2015.

Markdown

[Aybat et al. "An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/aybat2015icml-asynchronous/)

BibTeX
@inproceedings{aybat2015icml-asynchronous,
title = {{An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization}},
author = {Aybat, Necdet and Wang, Zi and Iyengar, Garud},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {2454--2462},
volume = {37},
url = {https://mlanthology.org/icml/2015/aybat2015icml-asynchronous/}
}