Streaming K-PCA: Efficient Guarantees for Oja’s Algorithm, Beyond Rank-One Updates
Abstract
We analyze Oja’s algorithm for streaming $k$-PCA, and prove that it achieves performance nearly matching that of an optimal offline algorithm. Given access to a sequence of i.i.d. $d \times d$ symmetric matrices, we show that Oja’s algorithm can obtain an accurate approximation to the subspace of the top $k$ eigenvectors of their expectation using a number of samples that scales polylogarithmically with $d$. Previously, such a result was only known in the case where the updates have rank one. Our analysis is based on recently developed matrix concentration tools, which allow us to prove strong bounds on the tails of the random matrices which arise in the course of the algorithm’s execution.
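To make the setting concrete, here is a minimal sketch of Oja's algorithm for streaming $k$-PCA, in the form the abstract describes: a stream of i.i.d. symmetric matrix samples whose expectation's top-$k$ eigenspace we want to track. The update is a stochastic power step followed by QR re-orthonormalization. All specifics below (dimensions, step size, the synthetic noise model) are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_steps, eta = 20, 3, 2000, 0.05  # hypothetical problem sizes / step size

# Ground-truth expectation: symmetric, with a well-separated top-k eigenspace.
evals = np.concatenate([np.array([5.0, 4.0, 3.0]), 0.1 * np.ones(d - k)])
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
Sigma = U @ np.diag(evals) @ U.T

def sample_symmetric():
    # One i.i.d. stream element: the mean matrix plus symmetric noise.
    N = rng.standard_normal((d, d))
    return Sigma + 0.05 * (N + N.T) / 2

# Oja's algorithm: maintain an orthonormal d x k iterate Q and, per sample,
# take a gradient-ascent-like step Q + eta * A @ Q, then re-orthonormalize.
Q = np.linalg.qr(rng.standard_normal((d, k)))[0]
for _ in range(n_steps):
    A = sample_symmetric()
    Q, _ = np.linalg.qr(Q + eta * A @ Q)

# Subspace error: spectral norm of (Q Q^T - I) V, i.e. the sine of the
# largest principal angle between span(Q) and the true top-k eigenspace.
V = U[:, :k]
err = np.linalg.norm(Q @ Q.T @ V - V, 2)
print(f"subspace error: {err:.3f}")
```

Note that only the spanned subspace is compared, not individual eigenvectors, since Oja's iterates converge to the top-$k$ eigenspace up to an orthogonal rotation.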
Cite

Text

Huang et al. "Streaming K-PCA: Efficient Guarantees for Oja’s Algorithm, Beyond Rank-One Updates." Conference on Learning Theory, 2021.

BibTeX
@inproceedings{huang2021colt-streaming,
title = {{Streaming K-PCA: Efficient Guarantees for Oja’s Algorithm, Beyond Rank-One Updates}},
author = {Huang, De and Niles-Weed, Jonathan and Ward, Rachel},
booktitle = {Conference on Learning Theory},
year = {2021},
pages = {2463--2498},
volume = {134},
url = {https://mlanthology.org/colt/2021/huang2021colt-streaming/}
}