Learning Invariances with Stationary Subspace Analysis

Abstract

Recently, a novel subspace decomposition method, termed 'Stationary Subspace Analysis' (SSA), has been proposed by von Bünau et al. [10]. SSA aims to find a linear projection to a lower-dimensional subspace such that the distribution of the projected data does not change over successive epochs or sub-datasets. We show that by modifying the loss function and the optimization procedure we can obtain an algorithm that is both faster and more accurate. We discuss the problem of indeterminacies and provide a lower bound on the number of epochs that is needed. Finally, in an experiment with simulated image patches, we show that SSA can be used favourably in invariance learning.
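The SSA objective described in the abstract can be illustrated with a minimal numerical sketch (not the authors' implementation): split the data into epochs, and search for a projection under which each epoch's projected mean and covariance stay close to a common reference, here the standard normal, measured by a KL-divergence-style loss. The function names (`ssa_loss`, `fit_ssa`), the QR-based orthonormalization, and the use of a generic L-BFGS optimizer are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def ssa_loss(w_flat, epochs, d, k):
    """Non-stationarity of the k-dim projection given by w_flat (d*k entries).

    For each epoch, measures the KL divergence (up to constants) of a Gaussian
    fit of the projected data from N(0, I): large epoch-wise mean shifts or
    covariance changes in the projected subspace increase the loss.
    Illustrative simplification of SSA, not the paper's exact loss.
    """
    # Orthonormalize the candidate projection; the loss depends only on the
    # spanned subspace, so the QR sign convention does not matter.
    W, _ = np.linalg.qr(w_flat.reshape(d, k))
    loss = 0.0
    for X in epochs:                      # X: (n_samples, d)
        Y = X @ W                         # project into candidate subspace
        mu = Y.mean(axis=0)
        cov = np.atleast_2d(np.cov(Y, rowvar=False))
        _, logdet = np.linalg.slogdet(cov)
        loss += mu @ mu - logdet          # KL(N(mu, cov) || N(0, I)) + const
    return loss

def fit_ssa(epochs, k, seed=0):
    """Find a k-dim subspace in which the epoch-wise distributions are stationary."""
    d = epochs[0].shape[1]
    rng = np.random.default_rng(seed)
    w0 = rng.standard_normal(d * k)
    res = minimize(ssa_loss, w0, args=(epochs, d, k), method="L-BFGS-B")
    W, _ = np.linalg.qr(res.x.reshape(d, k))
    return W
```

On toy data with two stationary dimensions and one dimension whose mean flips between epochs, the recovered projection concentrates on the stationary directions, since including the shifting dimension adds a large mean term to every epoch's loss.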

Cite

Text

Meinecke et al. "Learning Invariances with Stationary Subspace Analysis." IEEE/CVF International Conference on Computer Vision Workshops, 2009. doi:10.1109/ICCVW.2009.5457715

Markdown

[Meinecke et al. "Learning Invariances with Stationary Subspace Analysis." IEEE/CVF International Conference on Computer Vision Workshops, 2009.](https://mlanthology.org/iccvw/2009/meinecke2009iccvw-learning/) doi:10.1109/ICCVW.2009.5457715

BibTeX

@inproceedings{meinecke2009iccvw-learning,
  title     = {{Learning Invariances with Stationary Subspace Analysis}},
  author    = {Meinecke, Frank C. and von Bünau, Paul and Kawanabe, Motoaki and Müller, Klaus-Robert},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2009},
  pages     = {87--92},
  doi       = {10.1109/ICCVW.2009.5457715},
  url       = {https://mlanthology.org/iccvw/2009/meinecke2009iccvw-learning/}
}