Fast Algorithms for Robust PCA via Gradient Descent
Abstract
We consider the problem of Robust PCA in the fully and partially observed settings. Without corruptions, this is the well-known matrix completion problem. From a statistical standpoint this problem has been well-studied, and the conditions under which recovery is possible via polynomial-time algorithms (how many observations we need, how many corruptions we can tolerate) are by now understood. This paper presents and analyzes a non-convex optimization approach that greatly reduces the computational complexity of the above problems, compared to the best available algorithms. In particular, in the fully observed case, with $r$ denoting rank and $d$ dimension, we reduce the complexity from $O(r^2d^2\log(1/\epsilon))$ to $O(rd^2\log(1/\epsilon))$ -- a substantial saving when the rank is large. For the partially observed case, we show the complexity of our algorithm is no more than $O(r^4d\log(d)\log(1/\epsilon))$. Not only is this the best known run-time for a provable algorithm under partial observation, but when $r$ is small compared to $d$ it also yields near-linear-in-$d$ run-time, which can be exploited in the fully observed case as well by simply running our algorithm on a subset of the observations.
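To make the approach concrete, here is a minimal NumPy sketch of the fully observed case, illustrating the two alternating ingredients behind a non-convex method of this kind: re-estimating the sparse corruptions by hard-thresholding the residual, and taking gradient steps on the low-rank factors $U, V$ of the objective $\frac{1}{2}\|UV^\top + S - M\|_F^2$. This is an illustrative sketch under simplified assumptions, not the authors' exact procedure: the thresholding here keeps the largest entries globally for brevity (the paper's sparse estimator enforces a per-row/per-column corruption budget), and the function names (`sparse_threshold`, `robust_pca_gd`), step size, and iteration count are hypothetical choices.

```python
# Illustrative sketch of Robust PCA via factorized gradient descent on the
# fully observed model M = L* + S*, with L* of rank r and S* sparse.
# NOT the paper's exact algorithm: the thresholding, step size, and
# initialization are simplified stand-ins.
import numpy as np

def sparse_threshold(residual, k):
    """Keep the k largest-magnitude entries of `residual`, zero out the rest."""
    S = np.zeros_like(residual)
    if k > 0:
        idx = np.argpartition(np.abs(residual), -k, axis=None)[-k:]
        S.flat[idx] = residual.flat[idx]
    return S

def robust_pca_gd(M, r, alpha, eta=1e-3, n_iters=500):
    d1, d2 = M.shape
    k = int(alpha * d1 * d2)  # corruption budget (alpha-fraction of entries)
    # Initialize (U, V) from a rank-r SVD of M with the largest
    # suspected corruptions removed once.
    S = sparse_threshold(M, k)
    U0, sig, Vt0 = np.linalg.svd(M - S, full_matrices=False)
    U = U0[:, :r] * np.sqrt(sig[:r])
    V = Vt0[:r, :].T * np.sqrt(sig[:r])
    for _ in range(n_iters):
        L = U @ V.T
        S = sparse_threshold(M - L, k)  # re-estimate the sparse part
        R = L + S - M                   # residual of the current fit
        # Simultaneous gradient steps on 0.5 * ||U V^T + S - M||_F^2
        U, V = U - eta * (R @ V), V - eta * (R.T @ U)
    return U @ V.T, S
```

Each iteration is dominated by the matrix products with the $d \times r$ factors, i.e. $O(rd^2)$ time in the square case, which is the source of the per-iteration complexity improvement claimed above over methods that require a full SVD per step.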
Cite

Yi et al. "Fast Algorithms for Robust PCA via Gradient Descent." Neural Information Processing Systems, 2016.

BibTeX
@inproceedings{yi2016neurips-fast,
title = {{Fast Algorithms for Robust PCA via Gradient Descent}},
author = {Yi, Xinyang and Park, Dohyung and Chen, Yudong and Caramanis, Constantine},
booktitle = {Neural Information Processing Systems},
year = {2016},
  pages = {4152--4160},
url = {https://mlanthology.org/neurips/2016/yi2016neurips-fast/}
}