Fast Transformation-Invariant Factor Analysis
Abstract
Dimensionality reduction techniques such as principal component analysis and factor analysis are used to discover a linear mapping between high dimensional data samples and points in a lower dimensional subspace. In [6], Jojic and Frey introduced the mixture of transformation-invariant component analyzers (MTCA), which can account for global transformations such as translations and rotations, perform clustering, and learn local appearance deformations by dimensionality reduction. However, due to the enormous computational requirements of the EM algorithm for learning the model, which grow rapidly with the dimensionality of a data sample, MTCA was not practical for most applications. In this paper, we demonstrate how fast Fourier transforms can reduce the order of this computation. With this speedup, we show the effectiveness of MTCA in various applications: tracking, video textures, clustering video sequences, object recognition, and object detection in images.
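The speedup rests on the observation that sums over pixels that must be evaluated for every possible translation can be computed for all translations at once with FFT-based cross-correlations. The sketch below is a minimal illustration of that trick for the simplest case, a diagonal Gaussian appearance model scored under every cyclic 2-D shift; it is not the authors' MTCA E-step, and the function name `log_post_over_shifts` and the NumPy-based implementation are assumptions made here for illustration.

```python
import numpy as np

def log_post_over_shifts(x, mu, psi):
    """Unnormalized log-posterior over all cyclic 2-D translations of image x
    under a diagonal Gaussian appearance model N(mu, diag(psi)).

    The quadratic form for every shift is obtained at once from FFT-based
    circular cross-correlations, costing O(N log N) for N pixels instead of
    O(N^2) for an explicit loop over shifts. Toy sketch, not the paper's code.
    """
    F, iF = np.fft.fft2, np.fft.ifft2
    # corr(a, b)[t] = sum_i a[i + t] * b[i]  (circular cross-correlation)
    corr = lambda a, b: np.real(iF(F(a) * np.conj(F(b))))

    # sum_i (x_{T(i)} - mu_i)^2 / psi_i for every shift T, expanded into
    # two correlations plus a shift-independent constant
    quad = corr(x ** 2, 1.0 / psi) - 2.0 * corr(x, mu / psi) + np.sum(mu ** 2 / psi)
    return -0.5 * quad - 0.5 * np.sum(np.log(2.0 * np.pi * psi))

# Example: posterior over all translations of a random 64x64 image
rng = np.random.default_rng(0)
x, mu, psi = rng.normal(size=(64, 64)), rng.normal(size=(64, 64)), np.ones((64, 64))
lp = log_post_over_shifts(x, mu, psi)
post = np.exp(lp - lp.max())
post /= post.sum()  # normalized posterior over every cyclic shift
```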
Cite
Text
Kannan et al. "Fast Transformation-Invariant Factor Analysis." Neural Information Processing Systems, 2002.

Markdown
[Kannan et al. "Fast Transformation-Invariant Factor Analysis." Neural Information Processing Systems, 2002.](https://mlanthology.org/neurips/2002/kannan2002neurips-fast/)

BibTeX
@inproceedings{kannan2002neurips-fast,
title = {{Fast Transformation-Invariant Factor Analysis}},
author = {Kannan, Anitha and Jojic, Nebojsa and Frey, Brendan},
booktitle = {Neural Information Processing Systems},
year = {2002},
  pages = {1287--1294},
url = {https://mlanthology.org/neurips/2002/kannan2002neurips-fast/}
}