Convergence of Adaptive Algorithms for Constrained Weakly Convex Optimization

Abstract

We analyze the adaptive first-order algorithm AMSGrad for solving a constrained stochastic optimization problem with a weakly convex objective. We prove a $\widetilde{\mathcal{O}}(t^{-1/2})$ rate of convergence for the squared norm of the gradient of the Moreau envelope, which is the standard stationarity measure for this class of problems. This matches the known rates that adaptive algorithms enjoy in the specific case of unconstrained smooth nonconvex stochastic optimization. Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains. Finally, we illustrate applications and extensions of our results to specific problems and algorithms.
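For context, the stationarity measure referred to above is the gradient of the Moreau envelope. A standard formulation (generic notation, which may differ from the paper's) for a $\rho$-weakly convex $f$ over a closed convex set $\mathcal{X}$, with $\lambda \in (0, 1/\rho)$, is

$$f_\lambda(x) = \min_{y \in \mathcal{X}} \Big\{ f(y) + \tfrac{1}{2\lambda}\|y - x\|^2 \Big\}, \qquad \nabla f_\lambda(x) = \tfrac{1}{\lambda}\,(x - \hat{x}), \quad \hat{x} = \operatorname*{arg\,min}_{y \in \mathcal{X}} \Big\{ f(y) + \tfrac{1}{2\lambda}\|y - x\|^2 \Big\},$$

so a small $\|\nabla f_\lambda(x)\|^2$ certifies that $x$ is close to the near-stationary point $\hat{x}$.

The sketch below is a minimal illustration of an AMSGrad-style update with a projection step, run on a standard weakly convex example from the literature (robust phase retrieval over a box). The helper names (projected_amsgrad, grad_fn, project), the step-size and moment-parameter values, the example problem, and the plain Euclidean projection are illustrative assumptions rather than the paper's exact setup; for a box constraint, the diagonally weighted projection commonly used with AMSGrad reduces to the same coordinate-wise clipping.

import numpy as np

def projected_amsgrad(grad_fn, project, x0, steps=20000,
                      alpha=1e-2, beta1=0.9, beta2=0.99, eps=1e-8):
    # AMSGrad-style iteration followed by a projection onto the feasible set.
    # grad_fn(x, rng) returns a stochastic (sub)gradient at x (mini-batch size 1);
    # project(x) maps a point back onto the constraint set.
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)       # first-moment estimate
    v = np.zeros_like(x)       # second-moment estimate
    v_hat = np.zeros_like(x)   # running maximum of v (the AMSGrad correction)
    for _ in range(steps):
        g = grad_fn(x, rng)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        v_hat = np.maximum(v_hat, v)
        x = project(x - alpha * m / (np.sqrt(v_hat) + eps))
    return x

# Illustrative weakly convex problem: robust phase retrieval,
#   f(x) = (1/n) * sum_i | <a_i, x>^2 - b_i |,  constrained to the box [-5, 5]^d.
rng = np.random.default_rng(1)
d, n = 10, 200
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = (A @ x_true) ** 2

def grad_fn(x, rng):
    i = rng.integers(n)                        # single-sample stochastic subgradient
    r = A[i] @ x
    return np.sign(r * r - b[i]) * 2.0 * r * A[i]

x_out = projected_amsgrad(grad_fn, lambda z: np.clip(z, -5.0, 5.0), np.zeros(d))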

Cite

Text

Alacaoglu et al. "Convergence of Adaptive Algorithms for Constrained Weakly Convex Optimization." Neural Information Processing Systems, 2021.

Markdown

[Alacaoglu et al. "Convergence of Adaptive Algorithms for Constrained Weakly Convex Optimization." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/alacaoglu2021neurips-convergence/)

BibTeX

@inproceedings{alacaoglu2021neurips-convergence,
  title     = {{Convergence of Adaptive Algorithms for Constrained Weakly Convex Optimization}},
  author    = {Alacaoglu, Ahmet and Malitsky, Yura and Cevher, Volkan},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/alacaoglu2021neurips-convergence/}
}