Heavy-Ball Algorithms Always Escape Saddle Points
Abstract
Nonconvex optimization algorithms with random initialization have attracted increasing attention recently. It has been shown that many first-order methods always avoid saddle points when started from random points. In this paper, we answer the question: can nonconvex heavy-ball algorithms with random initialization avoid saddle points? The answer is yes! Directly applying the existing proof technique to heavy-ball algorithms is difficult because each heavy-ball iteration depends on both the current and the previous iterate, so the algorithm cannot be formulated as an iteration xk+1 = g(xk) under some mapping g. To this end, we design a new mapping on an augmented space. After a suitable change of variables, the heavy-ball algorithm can be interpreted as iterations of this mapping. Theoretically, we prove that heavy-ball gradient descent admits a larger stepsize than gradient descent while still escaping saddle points. We also consider the heavy-ball proximal point algorithm and prove that it, too, always escapes saddle points.
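The reformulation described in the abstract can be sketched as follows. This is an illustrative example, not the paper's code: stacking the current and previous iterates into z_k = (x_k, x_{k-1}) turns the two-term heavy-ball recursion into a single mapping z_{k+1} = G(z_k), and on a strict saddle f(x, y) = x^2 - y^2 the iterates from a small random initialization drift away from the saddle at the origin. The function names, stepsize alpha, and momentum beta below are assumptions chosen for illustration.

```python
import numpy as np

# Heavy-ball recursion (illustrative sketch):
#   x_{k+1} = x_k - alpha * grad_f(x_k) + beta * (x_k - x_{k-1})
# It depends on the last two iterates, so it is not of the form
# x_{k+1} = g(x_k). Stacking z_k = (x_k, x_{k-1}) gives a single
# mapping z_{k+1} = G(z_k) on the augmented (product) space.

def heavy_ball_map(z, grad_f, alpha, beta):
    """One step of the augmented mapping G(z), z = (x_k, x_{k-1})."""
    x_cur, x_prev = z
    x_next = x_cur - alpha * grad_f(x_cur) + beta * (x_cur - x_prev)
    return (x_next, x_cur)

# Strict saddle f(x, y) = x^2 - y^2, saddle point at the origin.
grad_f = lambda p: np.array([2.0 * p[0], -2.0 * p[1]])

rng = np.random.default_rng(0)
x0 = rng.normal(scale=1e-3, size=2)  # random init near the saddle
z = (x0, x0)                          # z_0 = (x_0, x_{-1}) with x_{-1} = x_0
for _ in range(200):
    z = heavy_ball_map(z, grad_f, alpha=0.1, beta=0.5)

# Along the negative-curvature direction (y) the iterates grow,
# i.e. the heavy-ball iterates escape the saddle point; along the
# positive-curvature direction (x) they contract toward zero.
```

The augmented-space view is what makes a stable-manifold-style argument possible: the escape analysis is carried out for the single mapping G rather than for the two-term recursion.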
Cite
Text
Sun et al. "Heavy-Ball Algorithms Always Escape Saddle Points." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/488
Markdown
[Sun et al. "Heavy-Ball Algorithms Always Escape Saddle Points." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/sun2019ijcai-heavy/) doi:10.24963/IJCAI.2019/488
BibTeX
@inproceedings{sun2019ijcai-heavy,
title = {{Heavy-Ball Algorithms Always Escape Saddle Points}},
author = {Sun, Tao and Li, Dongsheng and Quan, Zhe and Jiang, Hao and Li, Shengguo and Dou, Yong},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {3520-3526},
doi = {10.24963/IJCAI.2019/488},
url = {https://mlanthology.org/ijcai/2019/sun2019ijcai-heavy/}
}