Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization over Networks

Abstract

In this work, we study two first-order primal-dual algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that, with random initialization of the primal and dual variables, both algorithms compute second-order stationary solutions (ss2) with probability one. This is the first result showing that a primal-dual algorithm can find an ss2 using only first-order information; it also extends existing results for first-order, primal-only algorithms. An important implication is that our result yields the first global convergence guarantee to ss2 for two classes of unconstrained distributed non-convex learning problems over multi-agent networks.
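The problem class considered here takes the form min_x f(x) subject to Ax = b, with f smooth but possibly non-convex; broadly speaking, a gradient primal-dual scheme alternates a primal gradient step on the augmented Lagrangian with a dual ascent step on the constraint residual. The sketch below illustrates that kind of iteration on a toy problem. The exact update rule, the step sizes `alpha` and `beta`, the penalty parameter `rho`, and the `gpda_sketch` helper are illustrative assumptions for this example, not the paper's algorithm or its constants.

```python
import numpy as np

def gpda_sketch(grad_f, A, b, x0, rho=5.0, alpha=0.05, beta=0.05, iters=5000):
    """Illustrative gradient primal-dual loop for min f(x) s.t. Ax = b.

    Step sizes and the penalty parameter are values chosen for this toy
    example only; they are not taken from the paper.
    """
    x = x0.copy()
    lam = np.zeros(A.shape[0])            # dual variable for the constraint Ax = b
    for _ in range(iters):
        # Primal step: one gradient descent step on the augmented Lagrangian
        #   L(x, lam) = f(x) + lam^T (Ax - b) + (rho/2) ||Ax - b||^2.
        grad_L = grad_f(x) + A.T @ lam + rho * A.T @ (A @ x - b)
        x = x - alpha * grad_L
        # Dual step: gradient ascent on the constraint residual.
        lam = lam + beta * (A @ x - b)
    return x, lam

if __name__ == "__main__":
    rng = np.random.default_rng(0)        # random initialization of the primal variable
    Q = np.diag([1.0, -0.5, 2.0])         # indefinite Hessian -> non-convex objective
    grad_f = lambda x: Q @ x              # gradient of f(x) = 0.5 * x^T Q x
    A = np.array([[1.0, 1.0, 1.0]])       # single linear constraint: sum(x) = 1
    b = np.array([1.0])
    x, lam = gpda_sketch(grad_f, A, b, x0=rng.standard_normal(3))
    print("x =", x, "| constraint residual =", A @ x - b)
```

In the distributed setting referenced above, the linear constraint Ax = b would instead encode consensus (agreement between neighboring agents' copies of the variable over the network edges), which is how unconstrained multi-agent learning problems fit this template.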

Cite

Text

Hong et al. "Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization over Networks." International Conference on Machine Learning, 2018.

Markdown

[Hong et al. "Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization over Networks." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/hong2018icml-gradient/)

BibTeX

@inproceedings{hong2018icml-gradient,
  title     = {{Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization over Networks}},
  author    = {Hong, Mingyi and Razaviyayn, Meisam and Lee, Jason},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {2009--2018},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/hong2018icml-gradient/}
}