Online PCA with Optimal Regrets

Abstract

We carefully investigate the online version of PCA, where in each trial a learning algorithm plays a k-dimensional subspace, and suffers the compression loss on the next instance when projected onto the chosen subspace. In this setting, we give regret bounds for two popular online algorithms, Gradient Descent (GD) and Matrix Exponentiated Gradient (MEG). We show that both algorithms are essentially optimal in the worst case when the regret is expressed as a function of the number of trials. This comes as a surprise, since MEG is commonly believed to perform sub-optimally when the instances are sparse. This different behavior of MEG for PCA is mainly related to the non-negativity of the loss in this case, which makes the PCA setting qualitatively different from other settings studied in the literature. Furthermore, we show that when considering regret bounds as a function of a loss budget, MEG remains optimal and strictly outperforms GD.
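
For a concrete picture of the protocol the abstract describes, the following is a minimal, illustrative sketch (not the authors' implementation) of the online PCA loop with the two update rules being compared. It assumes the standard convex relaxation in which the learner maintains a matrix W with 0 ≼ W ≼ I and tr(W) = k, suffers the compression loss xᵀ(I − W)x, and updates either additively (GD, followed by a Euclidean projection of the eigenvalues) or multiplicatively (MEG, followed by an eigenvalue-capping step). The step size, the projection/capping routines, and the toy data below are assumptions made purely for illustration.

```python
# Illustrative sketch of online PCA with GD and MEG updates.
# NOT the paper's implementation; step size, helper routines and data
# are assumptions for illustration only.  The parameter W lives in the
# convex hull of rank-k projections {W : 0 <= W <= I, tr(W) = k}; the
# per-trial compression loss is x^T (I - W) x, whose gradient is -x x^T.

import numpy as np


def project_capped_simplex(v, k):
    """Euclidean projection of eigenvalues v onto {0 <= w <= 1, sum(w) = k},
    found by bisection on the shift tau in w_i = clip(v_i + tau, 0, 1)."""
    lo, hi = -1.0 - v.max(), 1.0 - v.min()
    for _ in range(100):
        tau = (lo + hi) / 2.0
        if np.clip(v + tau, 0.0, 1.0).sum() < k:
            lo = tau
        else:
            hi = tau
    return np.clip(v + (lo + hi) / 2.0, 0.0, 1.0)


def cap_eigenvalues(v, k):
    """Capping step after a multiplicative update: repeatedly fix the
    largest eigenvalues at 1 and rescale the rest so they sum to k."""
    w, capped = v.copy(), np.zeros(len(v), dtype=bool)
    for _ in range(len(v)):
        free = ~capped
        w[free] = v[free] * (k - capped.sum()) / v[free].sum()
        over = free & (w > 1.0)
        if not over.any():
            return np.minimum(w, 1.0)
        capped |= over
        w[capped] = 1.0
    return w


def gd_update(W, x, eta, k):
    """Gradient Descent: additive step, then Euclidean projection."""
    V = W + eta * np.outer(x, x)            # gradient of the loss is -x x^T
    lam, U = np.linalg.eigh((V + V.T) / 2)
    return U @ np.diag(project_capped_simplex(lam, k)) @ U.T


def meg_update(W, x, eta, k):
    """Matrix Exponentiated Gradient: multiplicative step, then capping."""
    lam, U = np.linalg.eigh(W)
    lam = np.maximum(lam, 1e-12)            # keep the matrix logarithm finite
    V = U @ np.diag(np.log(lam)) @ U.T + eta * np.outer(x, x)
    mu, Q = np.linalg.eigh((V + V.T) / 2)
    return Q @ np.diag(cap_eigenvalues(np.exp(mu), k)) @ Q.T


# Toy run of the protocol: play W_t, observe x_t, suffer x_t^T (I - W_t) x_t.
d, k, eta, rng = 5, 2, 0.1, np.random.default_rng(0)
W_gd = W_meg = np.eye(d) * (k / d)          # uniform start, trace k
for t in range(100):
    x = rng.normal(size=d)
    loss_gd = x @ (np.eye(d) - W_gd) @ x
    loss_meg = x @ (np.eye(d) - W_meg) @ x
    W_gd = gd_update(W_gd, x, eta, k)
    W_meg = meg_update(W_meg, x, eta, k)
```

The sketch works directly with the relaxed parameter W and its expected loss; the additional step of sampling an actual rank-k projection whose expectation is W is omitted for brevity.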

Cite

Text

Nie et al. "Online PCA with Optimal Regrets." International Conference on Algorithmic Learning Theory, 2013. doi:10.1007/978-3-642-40935-6_8

Markdown

[Nie et al. "Online PCA with Optimal Regrets." International Conference on Algorithmic Learning Theory, 2013.](https://mlanthology.org/alt/2013/nie2013alt-online/) doi:10.1007/978-3-642-40935-6_8

BibTeX

@inproceedings{nie2013alt-online,
  title     = {{Online PCA with Optimal Regrets}},
  author    = {Nie, Jiazhong and Kotlowski, Wojciech and Warmuth, Manfred K.},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2013},
  pages     = {98--112},
  doi       = {10.1007/978-3-642-40935-6_8},
  url       = {https://mlanthology.org/alt/2013/nie2013alt-online/}
}