A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces

Abstract

In this paper, we derive an algorithm that learns a principal subspace from sample entries, can be applied when the approximate subspace is represented by a neural network, and hence can be scaled to datasets with an effectively infinite number of rows and columns. Our method consists in defining a loss function whose minimizer is the desired principal subspace, and constructing a gradient estimate of this loss whose bias can be controlled.
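The abstract only outlines the approach at a high level. As a rough illustration of the general idea, and not the paper's actual loss or gradient estimator, the sketch below learns a d-dimensional principal subspace by stochastic gradient descent on a standard projection (reconstruction) loss, sampling one column of the data matrix per step; the matrix Psi, the parameters Phi, and the hyperparameters are all assumptions made for this example.

import numpy as np

# Illustrative sketch only (not the paper's method): minimize the projection loss
#     L(Phi) = E_x || x - Phi (Phi^T Phi)^{-1} Phi^T x ||^2
# over sampled columns x of Psi. Minimizers of this loss span the top-d
# left singular (principal) subspace of Psi.
rng = np.random.default_rng(0)
n, m, d = 50, 200, 5
Psi = rng.standard_normal((n, d)) @ rng.standard_normal((d, m)) \
      + 0.1 * rng.standard_normal((n, m))          # approximately rank-d data matrix
Phi = rng.standard_normal((n, d))                  # subspace parameters

lr = 1e-3
for step in range(10_000):
    x = Psi[:, rng.integers(m)]                    # sample one column of the matrix
    w = np.linalg.solve(Phi.T @ Phi, Phi.T @ x)    # least-squares coefficients of x in span(Phi)
    residual = x - Phi @ w                         # projection residual
    grad = -2.0 * np.outer(residual, w)            # gradient of ||residual||^2 w.r.t. Phi (Danskin)
    Phi -= lr * grad

# Check alignment of the learned span with the top-d left singular subspace of Psi.
U = np.linalg.svd(Psi, full_matrices=False)[0][:, :d]
Q = np.linalg.qr(Phi)[0]
print("cosine of largest principal angle:",
      np.linalg.svd(U.T @ Q, compute_uv=False).min())

A value close to 1 in the final check indicates the learned span matches the principal subspace; the paper's contribution concerns constructing such a loss and a gradient estimate with controllable bias when only sample entries are available and the subspace is parameterized by a neural network.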

Cite

Text

Le Lan et al. "A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces." NeurIPS 2022 Workshops: OPT, 2022.

Markdown

[Le Lan et al. "A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces." NeurIPS 2022 Workshops: OPT, 2022.](https://mlanthology.org/neuripsw/2022/lan2022neuripsw-novel/)

BibTeX

@inproceedings{lan2022neuripsw-novel,
  title     = {{A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces}},
  author    = {Le Lan, Charline and Greaves, Joshua and Farebrother, Jesse and Rowland, Mark and Pedregosa, Fabian and Agarwal, Rishabh and Bellemare, Marc G},
  booktitle = {NeurIPS 2022 Workshops: OPT},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/lan2022neuripsw-novel/}
}