Online PCA with Optimal Regret

Abstract

We investigate the online version of Principal Component Analysis (PCA), where in each trial $t$ the learning algorithm chooses a $k$-dimensional subspace and, upon receiving the next instance vector $\mathbf{x}_t$, suffers the compression loss, i.e. the squared Euclidean distance between this instance and its projection onto the chosen subspace. In the right parameterization, this compression loss is linear: it can be rewritten as $\text{tr}(\mathbf{W}_t\mathbf{x}_t\mathbf{x}_t^\top)$, where $\mathbf{W}_t$ is the parameter matrix of the algorithm and the outer product $\mathbf{x}_t\mathbf{x}_t^\top$ (with $\|\mathbf{x}_t\|\le 1$) is the instance matrix. In this paper we generalize PCA to arbitrary positive definite instance matrices $\mathbf{X}_t$ with the linear loss $\text{tr}(\mathbf{W}_t\mathbf{X}_t)$.
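As a sanity check on the identity above, here is a minimal NumPy sketch (our own illustration, not code from the paper): it represents the chosen $k$-dimensional subspace by an orthonormal basis, takes $\mathbf{W}$ to be the projector onto the orthogonal complement, and confirms that the squared distance from $\mathbf{x}$ to its projection equals $\text{tr}(\mathbf{W}\mathbf{x}\mathbf{x}^\top)$.

import numpy as np

# Sketch: verify that the compression loss equals tr(W x x^T) when W is
# the projector onto the orthogonal complement of the chosen subspace.
rng = np.random.default_rng(0)
n, k = 5, 2

# Orthonormal basis U of a random k-dimensional subspace of R^n.
U, _ = np.linalg.qr(rng.standard_normal((n, k)))
P = U @ U.T             # projector onto the chosen subspace
W = np.eye(n) - P       # parameter matrix: projector onto the complement

x = rng.standard_normal(n)
x /= np.linalg.norm(x)  # instance vector with ||x|| <= 1

loss_geometric = np.linalg.norm(x - P @ x) ** 2  # squared distance to projection
loss_linear = np.trace(W @ np.outer(x, x))       # linear loss tr(W x x^T)

assert np.isclose(loss_geometric, loss_linear)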

Cite

Text

Nie et al. "Online PCA with Optimal Regret." Journal of Machine Learning Research, 17:1-49, 2016.

Markdown

[Nie et al. "Online PCA with Optimal Regret." Journal of Machine Learning Research, 17:1-49, 2016.](https://mlanthology.org/jmlr/2016/nie2016jmlr-online/)

BibTeX

@article{nie2016jmlr-online,
  title     = {{Online PCA with Optimal Regret}},
  author    = {Nie, Jiazhong and Kotlowski, Wojciech and Warmuth, Manfred K.},
  journal   = {Journal of Machine Learning Research},
  year      = {2016},
  pages     = {1-49},
  volume    = {17},
  url       = {https://mlanthology.org/jmlr/2016/nie2016jmlr-online/}
}