Efficient Convex Relaxations for Streaming PCA
Abstract
We revisit two algorithms, matrix stochastic gradient (MSG) and $\ell_2$-regularized MSG (RMSG), which are instances of stochastic gradient descent (SGD) on a convex relaxation of principal component analysis (PCA). These algorithms have been shown, empirically, to outperform Oja's algorithm in terms of iteration complexity while having comparable runtime. However, these findings are not supported by existing theoretical results. While the iteration complexity bound for $\ell_2$-RMSG was recently shown to match that of Oja's algorithm, its theoretical efficiency was left as an open problem. In this work, we give improved bounds on the per-iteration cost of mini-batched variants of both MSG and $\ell_2$-RMSG and arrive at an algorithm whose total computational complexity matches that of Oja's algorithm.
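As a point of reference for the two update rules the abstract compares, the following is a minimal NumPy sketch assuming the standard formulations: MSG takes projected gradient steps on the Fantope $\{P : 0 \preceq P \preceq I,\ \mathrm{tr}(P) = k\}$, while Oja's algorithm takes a stochastic power step followed by re-orthonormalization. The function names (`fantope_projection`, `msg_step`, `oja_step`) and the bisection-based projection routine are illustrative choices, not the authors' implementation.

```python
import numpy as np

def fantope_projection(M, k, tol=1e-8):
    """Project a symmetric matrix M onto the Fantope
    {P : 0 <= P <= I, tr(P) = k} in Frobenius norm by shifting
    and clipping eigenvalues; the shift is found by bisection."""
    lam, V = np.linalg.eigh(M)
    lo, hi = -lam.max(), 1.0 - lam.min()  # clipped trace ranges from 0 to d
    while hi - lo > tol:
        s = 0.5 * (lo + hi)
        if np.clip(lam + s, 0.0, 1.0).sum() < k:
            lo = s
        else:
            hi = s
    clipped = np.clip(lam + 0.5 * (lo + hi), 0.0, 1.0)
    return (V * clipped) @ V.T

def msg_step(P, X_batch, eta, k):
    """One mini-batched MSG update: a gradient step in the direction of
    the empirical covariance, then projection back onto the Fantope."""
    C_hat = X_batch.T @ X_batch / X_batch.shape[0]
    return fantope_projection(P + eta * C_hat, k)

def oja_step(W, X_batch, eta):
    """One mini-batched Oja update: stochastic power step plus QR re-orthonormalization."""
    C_hat = X_batch.T @ X_batch / X_batch.shape[0]
    Q, _ = np.linalg.qr(W + eta * C_hat @ W)
    return Q
```

The per-iteration cost difference the abstract refers to is visible here: the Fantope projection in MSG requires an eigendecomposition of a $d \times d$ iterate, whereas Oja's step only orthonormalizes a $d \times k$ matrix.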
Cite
Text
Arora and Marinov. "Efficient Convex Relaxations for Streaming PCA." Neural Information Processing Systems, 2019.
Markdown
[Arora and Marinov. "Efficient Convex Relaxations for Streaming PCA." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/arora2019neurips-efficient/)
BibTeX
@inproceedings{arora2019neurips-efficient,
  title     = {{Efficient Convex Relaxations for Streaming PCA}},
  author    = {Arora, Raman and Marinov, Teodor Vanislavov},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {10496--10505},
  url       = {https://mlanthology.org/neurips/2019/arora2019neurips-efficient/}
}