Accelerated Factored Gradient Descent for Low-Rank Matrix Factorization
Abstract
We study the low-rank matrix estimation problem, where the objective function $\mathcal{L}(\mathbf{M})$ is defined over the space of positive semidefinite matrices with rank less than or equal to $r$. A fast approach to solving this problem is matrix factorization, which reparameterizes $\mathbf{M}$ as the product of two smaller matrices such that $\mathbf{M} = \mathbf{U}\mathbf{U}^\top$ and then performs gradient descent on $\mathbf{U}$ directly, also known as factored gradient descent. Since the resulting problem is nonconvex, whether Nesterov's acceleration scheme can be adapted to it has remained a long-standing question. In this paper, we answer this question affirmatively by proposing a novel and practical accelerated factored gradient descent method motivated by Nesterov's accelerated gradient descent. The proposed method enjoys better iteration complexity and computational complexity than state-of-the-art algorithms in a wide regime. The key idea of our algorithm is to restrict all of its iterates to a special convex set, which enables the acceleration. Experimental results demonstrate the faster convergence of our algorithm and corroborate our theory.
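To make the setup concrete, the sketch below implements plain factored gradient descent with a heuristic Nesterov-style extrapolation step on the toy loss $\mathcal{L}(\mathbf{M}) = \frac{1}{2}\|\mathbf{M} - \mathbf{M}^*\|_F^2$. The loss, step size, and momentum coefficient are illustrative assumptions, and the sketch omits the paper's key ingredient, the restriction of the iterates to a special convex set; it is not the authors' AFGD algorithm.

```python
import numpy as np

# Minimal sketch of factored gradient descent with a Nesterov-style
# momentum step for
#     min_U  L(U U^T),   L(M) = 0.5 * ||M - M_star||_F^2.
# Illustration only: the paper's AFGD additionally restricts every
# iterate to a special convex set, which this sketch omits.

def factored_grad(U, M_star):
    """Gradient of L(U U^T) w.r.t. U; since the residual R is symmetric,
    dL/dU = (R + R^T) U = 2 R U."""
    R = U @ U.T - M_star
    return 2.0 * R @ U

def momentum_fgd(M_star, r, iters=500, beta=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n = M_star.shape[0]
    eta = 0.1 / np.linalg.norm(M_star, 2)         # conservative step size (assumption)
    U = rng.standard_normal((n, r)) / np.sqrt(n)  # small random initialization
    V = U.copy()                                  # extrapolation point
    for _ in range(iters):
        U_next = V - eta * factored_grad(V, M_star)  # gradient step at V
        V = U_next + beta * (U_next - U)             # Nesterov-style extrapolation
        U = U_next
    return U

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 3))
    M_star = A @ A.T                              # rank-3 PSD ground truth
    U = momentum_fgd(M_star, r=3)
    err = np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star)
    print(f"relative recovery error: {err:.2e}")
```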
Cite

Text

Zhou et al. "Accelerated Factored Gradient Descent for Low-Rank Matrix Factorization." Artificial Intelligence and Statistics, 2020.

Markdown

[Zhou et al. "Accelerated Factored Gradient Descent for Low-Rank Matrix Factorization." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/zhou2020aistats-accelerated/)

BibTeX
@inproceedings{zhou2020aistats-accelerated,
title = {{Accelerated Factored Gradient Descent for Low-Rank Matrix Factorization}},
author = {Zhou, Dongruo and Cao, Yuan and Gu, Quanquan},
booktitle = {Artificial Intelligence and Statistics},
year = {2020},
pages = {4430--4440},
volume = {108},
url = {https://mlanthology.org/aistats/2020/zhou2020aistats-accelerated/}
}