Exponentially Convergent Stochastic K-PCA Without Variance Reduction

Abstract

We present Matrix Krasulina, an algorithm for online k-PCA, by generalizing the classic Krasulina's method (Krasulina, 1969) from the vector to the matrix case. We show, both theoretically and empirically, that the algorithm naturally adapts to data low-rankness and converges exponentially fast to the ground-truth principal subspace. Notably, our result suggests that despite various recent efforts to accelerate the convergence of stochastic-gradient-based methods by adding an O(n)-time variance-reduction step, for the k-PCA problem, a truly online SGD variant suffices to achieve exponential convergence on intrinsically low-rank data.
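For intuition, the classic Krasulina update nudges the current eigenvector estimate along the component of each sample that lies outside its own span; the matrix version described in the abstract does the analogous thing for a k-dimensional subspace estimate. The NumPy sketch below is a minimal illustration of that idea, assuming the subspace estimate W is stored column-wise; the function names, step size, problem sizes, and the exact matrix form of the update are illustrative assumptions and may differ from the paper's parameterization.

import numpy as np

def krasulina_vector_step(w, x, eta):
    # Classic Krasulina (1969) step for the top eigenvector:
    # move w along the part of x orthogonal to w, scaled by <x, w>.
    xw = x @ w
    return w + eta * xw * (x - (xw / (w @ w)) * w)

def matrix_krasulina_step(W, x, eta):
    # Illustrative k-dimensional generalization (hypothetical form):
    # project the stochastic gradient x x^T W onto the orthogonal
    # complement of span(W), so the update vanishes once span(W)
    # already contains the sample -- the mechanism by which such a
    # method can adapt to intrinsically low-rank data.
    g = np.outer(x, x @ W)                          # x x^T W, shape (d, k)
    inside = W @ np.linalg.solve(W.T @ W, W.T @ g)  # part of g inside span(W)
    return W + eta * (g - inside)

# Toy run on synthetic rank-k data (all sizes and the step size are hypothetical).
rng = np.random.default_rng(0)
d, k, n = 50, 3, 5000
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]  # ground-truth subspace
X = rng.standard_normal((n, k)) @ basis.T             # exactly rank-k samples

W = rng.standard_normal((d, k))
for x in X:
    W = matrix_krasulina_step(W, x, eta=0.05)

# Subspace recovery error: residual of the true basis outside span(W).
Q, _ = np.linalg.qr(W)
print(np.linalg.norm(basis - Q @ (Q.T @ basis)))

On exactly rank-k samples as above, the projected update shrinks as span(W) aligns with the data, which is the low-rank adaptivity the abstract refers to; this toy script only demonstrates the update's structure, not the paper's convergence guarantee.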

Cite

Text

Tang. "Exponentially Convergent Stochastic K-PCA Without Variance Reduction." Neural Information Processing Systems, 2019.

Markdown

[Tang. "Exponentially Convergent Stochastic K-PCA Without Variance Reduction." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/tang2019neurips-exponentially/)

BibTeX

@inproceedings{tang2019neurips-exponentially,
  title     = {{Exponentially Convergent Stochastic K-PCA Without Variance Reduction}},
  author    = {Tang, Cheng},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {12393--12404},
  url       = {https://mlanthology.org/neurips/2019/tang2019neurips-exponentially/}
}