A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements
Abstract
We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With $O(r^3 \kappa^2 n \log n)$ random measurements of a positive semidefinite $n\times n$ matrix of rank $r$ and condition number $\kappa$, our method is guaranteed to converge linearly to the global optimum.
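The recipe the abstract refers to can be sketched as follows: parameterize the unknown positive semidefinite matrix as $X = UU^T$ with $U \in \mathbb{R}^{n \times r}$, initialize from the spectrum of $M = \frac{1}{m}\sum_i b_i A_i$, and run gradient descent on the least-squares objective $f(U) = \frac{1}{4m}\sum_i (\langle A_i, UU^T\rangle - b_i)^2$. The NumPy sketch below is an illustration of this factored gradient descent under these assumptions; the names apply_A and recover_psd, the step-size rule eta / lambda_1(M), and the iteration count are illustrative choices, not the paper's exact constants.

import numpy as np

def apply_A(A, X):
    # Linear measurement operator: returns the vector of inner products <A_i, X>.
    return np.tensordot(A, X, axes=([1, 2], [0, 1]))

def recover_psd(A, b, r, eta=0.2, iters=1000):
    # Gradient descent on the factored objective
    #     f(U) = (1/4m) * sum_i (<A_i, U U^T> - b_i)^2,
    # which parameterizes the unknown PSD matrix as X = U U^T.
    # Spectral initialization from M = (1/m) sum_i b_i A_i; the step-size
    # rule and iteration count are illustrative, not the paper's constants.
    m = len(A)
    M = np.tensordot(b, A, axes=(0, 0)) / m
    M = (M + M.T) / 2
    vals, vecs = np.linalg.eigh(M)
    top = np.argsort(vals)[::-1][:r]
    U = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
    step = eta / max(vals[top].max(), 1e-12)       # scale the step by lambda_1(M)
    for _ in range(iters):
        res = apply_A(A, U @ U.T) - b              # residuals <A_i, U U^T> - b_i
        S = np.tensordot(res, A, axes=(0, 0)) / m  # (1/m) sum_i res_i A_i
        U = U - step * ((S + S.T) / 2) @ U         # gradient step (A_i symmetrized)
    return U @ U.T

A toy run with symmetric Gaussian sensing matrices (all sizes here are illustrative) recovers the planted matrix to small relative error:

rng = np.random.default_rng(0)
n, r, m = 40, 2, 800
U_star = rng.standard_normal((n, r))
X_star = U_star @ U_star.T                 # ground-truth PSD matrix of rank r
G = rng.standard_normal((m, n, n))
A = (G + np.transpose(G, (0, 2, 1))) / 2   # symmetric Gaussian A_i
b = apply_A(A, X_star)
X_hat = recover_psd(A, b, r)
print(np.linalg.norm(X_hat - X_star) / np.linalg.norm(X_star))

Working with the n x r factor U rather than the full n x n matrix is what makes the method scalable: each iteration avoids any eigendecomposition or projection onto the PSD cone.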
Cite

Text:
Zheng and Lafferty. "A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements." Neural Information Processing Systems, 2015.

Markdown:
[Zheng and Lafferty. "A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/zheng2015neurips-convergent/)

BibTeX:
@inproceedings{zheng2015neurips-convergent,
title = {{A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements}},
author = {Zheng, Qinqing and Lafferty, John},
booktitle = {Neural Information Processing Systems},
year = {2015},
pages = {109--117},
url = {https://mlanthology.org/neurips/2015/zheng2015neurips-convergent/}
}