On the Convergence of (Stochastic) Gradient Descent with Extrapolation for Non-Convex Minimization
Abstract
Extrapolation is a well-known technique for solving convex optimization problems and variational inequalities, and it has recently attracted attention for non-convex optimization. Several recent works have empirically demonstrated its success on machine learning tasks. However, it has not been analyzed for non-convex minimization, so a gap remains between theory and practice. In this paper, we analyze gradient descent and stochastic gradient descent with extrapolation for finding an approximate first-order stationary point of smooth non-convex optimization problems. Our convergence upper bounds show that the algorithms with extrapolation converge faster than their counterparts without extrapolation.
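To illustrate the idea, here is a minimal sketch of a common extrapolation scheme for gradient descent, in which the gradient is evaluated at an extrapolated point rather than the current iterate. This is an illustrative variant, not necessarily the exact update analyzed in the paper; the step size `eta`, extrapolation weight `beta`, and the test objective are assumptions chosen for the demo.

```python
import numpy as np

def gd_with_extrapolation(grad, x0, eta=0.01, beta=0.5, iters=500):
    """Gradient descent with extrapolation.

    At each step, form the extrapolated point
        y_t = x_t + beta * (x_t - x_{t-1})
    and take the gradient step at y_t:
        x_{t+1} = y_t - eta * grad(y_t).
    """
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)        # extrapolation step
        x_prev, x = x, y - eta * grad(y)   # gradient step at the extrapolated point
    return x

# Demo on a smooth non-convex objective f(x) = x^4 - 3x^2,
# whose first-order stationary points are 0 and +/- sqrt(3/2).
grad = lambda x: 4 * x**3 - 6 * x
x_star = gd_with_extrapolation(grad, np.array([2.0]))
```

With `beta = 0` the method reduces to plain gradient descent, so the extrapolation weight directly controls how far the gradient is evaluated ahead of the current iterate.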
Cite
Text
Xu et al. "On the Convergence of (Stochastic) Gradient Descent with Extrapolation for Non-Convex Minimization." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/556
Markdown
[Xu et al. "On the Convergence of (Stochastic) Gradient Descent with Extrapolation for Non-Convex Minimization." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/xu2019ijcai-convergence/) doi:10.24963/IJCAI.2019/556
BibTeX
@inproceedings{xu2019ijcai-convergence,
title = {{On the Convergence of (Stochastic) Gradient Descent with Extrapolation for Non-Convex Minimization}},
author = {Xu, Yi and Yuan, Zhuoning and Yang, Sen and Jin, Rong and Yang, Tianbao},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
  pages = {4003--4009},
doi = {10.24963/IJCAI.2019/556},
url = {https://mlanthology.org/ijcai/2019/xu2019ijcai-convergence/}
}