Fast, Large-Scale Transformation-Invariant Clustering
Abstract
In previous work on "transformed mixtures of Gaussians" and "transformed hidden Markov models", we showed how the EM algorithm in a discrete latent variable model can be used to jointly normalize data (e.g., center images, pitch-normalize spectrograms) and learn a mixture model of the normalized data. The only input to the algorithm is the data, a list of possible transformations, and the number of clusters to find. The main criticism of this work was that the exhaustive computation of the posterior probabilities over transformations would make scaling up to large feature vectors and large sets of transformations intractable. Here, we describe how a tremendous speed-up is achieved through the use of a variational technique for decoupling transformations, and a fast Fourier transform method for computing posterior probabilities. For N×N images, learning C clusters under N rotations, N scales, N x-translations, and N y-translations takes only (C + 2 log N)N² scalar operations per iteration, in contrast to the CN⁶ operations required by the original algorithm.
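For intuition about where the FFT speed-up comes from: under a Gaussian cluster model, the only term of the log-likelihood that depends on the translation is an inner product between the image and a shifted cluster mean, and the FFT evaluates that inner product for all N² cyclic shifts in O(N² log N) time. The NumPy sketch below illustrates this under assumptions not fixed by the abstract: an isotropic Gaussian observation model, wrap-around (cyclic) shifts, and a uniform prior over translations. The function name `translation_posterior` is ours, and this is an illustration of the general technique, not the paper's exact E step.

```python
import numpy as np

def translation_posterior(x, mu, var):
    """Posterior over all cyclic 2-D translations T of the cluster mean mu,
    given an N x N image x, in O(N^2 log N) time via the FFT.

    Assumes p(x | T) = N(x; T mu, var * I) and a uniform prior over T.
    """
    # ||x - T mu||^2 = ||x||^2 + ||mu||^2 - 2 <x, T mu>; only the cross term
    # depends on the shift, and the FFT gives all N^2 cross terms at once
    # as a circular cross-correlation.
    cross = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(mu))))
    log_lik = (2.0 * cross - np.sum(x**2) - np.sum(mu**2)) / (2.0 * var)
    log_lik -= log_lik.max()            # stabilize before exponentiating
    post = np.exp(log_lik)
    return post / post.sum()            # P(T | x): one entry per (dy, dx) shift

# Usage: an image that is a shifted copy of the mean should put nearly all
# posterior mass on that shift.
rng = np.random.default_rng(0)
mu = rng.standard_normal((64, 64))
x = np.roll(mu, shift=(5, 12), axis=(0, 1))
post = translation_posterior(x, mu, var=1.0)
print(np.unravel_index(np.argmax(post), post.shape))  # expected: (5, 12)
```

The variational technique mentioned in the abstract is complementary: it decouples the transformation factors (rotation, scale, translation) so their posteriors can be inferred separately, making the cost additive rather than multiplicative across transformation types.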
Cite
Text
Frey and Jojic. "Fast, Large-Scale Transformation-Invariant Clustering." Neural Information Processing Systems, 2001.

Markdown

[Frey and Jojic. "Fast, Large-Scale Transformation-Invariant Clustering." Neural Information Processing Systems, 2001.](https://mlanthology.org/neurips/2001/frey2001neurips-fast/)

BibTeX
@inproceedings{frey2001neurips-fast,
title = {{Fast, Large-Scale Transformation-Invariant Clustering}},
author = {Frey, Brendan J. and Jojic, Nebojsa},
booktitle = {Neural Information Processing Systems},
year = {2001},
pages = {721--727},
url = {https://mlanthology.org/neurips/2001/frey2001neurips-fast/}
}