The Fast Convergence of Incremental PCA
Abstract
We prove the first finite-sample convergence rates for any incremental PCA algorithm using sub-quadratic time and memory per iteration. The algorithm analyzed is Oja's learning rule, an efficient and well-known scheme for estimating the top principal component. Our analysis of this non-convex problem yields expected and high-probability convergence rates of $\tilde{O}(1/n)$ through a novel technique. We relate our guarantees to existing rates for stochastic gradient descent on strongly convex functions, and extend those results. We also include experiments which demonstrate convergence behaviors predicted by our analysis.
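The abstract refers to Oja's learning rule as the algorithm under analysis; for reference, a minimal sketch of the standard Oja update for the top principal component is given below. The step-size schedule gamma_n = c/n and the synthetic data generator are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def oja_top_component(samples, c=2.0, seed=0):
    """Minimal sketch of Oja's learning rule for the top principal component.

    samples : iterable of d-dimensional vectors x_1, x_2, ... (assumed i.i.d.)
    c       : step-size constant; gamma_n = c / n is an illustrative choice.
    Returns the unit-norm estimate after consuming all samples.
    """
    rng = np.random.default_rng(seed)
    w = None
    for n, x in enumerate(samples, start=1):
        if w is None:
            # Random unit-norm initialization.
            w = rng.standard_normal(x.shape[0])
            w /= np.linalg.norm(w)
        gamma = c / n
        # Oja update: w <- w + gamma * x (x^T w), then renormalize.
        w = w + gamma * x * (x @ w)
        w /= np.linalg.norm(w)
    return w

# Usage on synthetic data whose top principal direction is e_1 (illustrative).
if __name__ == "__main__":
    d, n_samples = 20, 5000
    rng = np.random.default_rng(1)
    scales = np.ones(d)
    scales[0] = 3.0  # larger variance along the first coordinate
    data = rng.standard_normal((n_samples, d)) * scales
    w_hat = oja_top_component(iter(data))
    print("alignment with e_1:", abs(w_hat[0]))
```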
Cite
Text
Balsubramani et al. "The Fast Convergence of Incremental PCA." Neural Information Processing Systems, 2013.
Markdown
[Balsubramani et al. "The Fast Convergence of Incremental PCA." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/balsubramani2013neurips-fast/)
BibTeX
@inproceedings{balsubramani2013neurips-fast,
title = {{The Fast Convergence of Incremental PCA}},
author = {Balsubramani, Akshay and Dasgupta, Sanjoy and Freund, Yoav},
booktitle = {Neural Information Processing Systems},
year = {2013},
pages = {3174-3182},
url = {https://mlanthology.org/neurips/2013/balsubramani2013neurips-fast/}
}