Global Coordination of Local Linear Models

Abstract

High dimensional data that lies on or near a low dimensional manifold can be described by a collection of local linear models. Such a description, however, does not provide a global parameterization of the manifold—arguably an important goal of unsupervised learning. In this paper, we show how to learn a collection of local linear models that solves this more difficult problem. Our local linear models are represented by a mixture of factor analyzers, and the “global coordination” of these models is achieved by adding a regularizing term to the standard maximum likelihood objective function. The regularizer breaks a degeneracy in the mixture model’s parameter space, favoring models whose internal coordinate systems are aligned in a consistent way. As a result, the internal coordinates change smoothly and continuously as one traverses a connected path on the manifold—even when the path crosses the domains of many different local models. The regularizer takes the form of a Kullback-Leibler divergence and illustrates an unexpected application of variational methods: not to perform approximate inference in intractable probabilistic models, but to learn more useful internal representations in tractable ones.
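
The sketch below is a minimal numpy illustration of the idea summarized in the abstract, not the authors' algorithm: it evaluates the log-likelihood of a mixture of factor analyzers on toy data and adds a penalty that pushes the components' internal coordinates to agree on a single global coordinate for each point. The local-to-global maps `A` and `kappa` and the disagreement penalty are hypothetical stand-ins; the paper's actual regularizer is a Kullback-Leibler divergence, and the parameters here are randomly initialized rather than fit by EM.

# Illustrative sketch only (assumed names A, kappa; surrogate penalty in
# place of the paper's KL-divergence regularizer).
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
N, D, d, K = 200, 5, 2, 3          # data points, data dim, latent dim, components

# Toy data: a noisy 2-D manifold embedded in D dimensions.
t = rng.uniform(-1, 1, size=(N, d))
W_true = rng.normal(size=(d, D))
X = np.tanh(t) @ W_true + 0.05 * rng.normal(size=(N, D))

# Randomly initialized MFA parameters (in practice these are fit by EM).
pi = np.full(K, 1.0 / K)            # mixing weights
mu = rng.normal(size=(K, D))        # component means
Lam = rng.normal(size=(K, D, d))    # factor loadings
psi = np.full((K, D), 0.1)          # diagonal noise variances
A = rng.normal(size=(K, d, d))      # hypothetical local-to-global linear maps
kappa = rng.normal(size=(K, d))     # hypothetical global offsets

def mfa_terms(X):
    """Per-point, per-component log densities, responsibilities, and
    posterior-mean global coordinates under each local model."""
    log_px = np.zeros((N, K))
    g_mean = np.zeros((N, K, d))
    for k in range(K):
        C = Lam[k] @ Lam[k].T + np.diag(psi[k])      # marginal covariance of x | k
        Cinv = np.linalg.inv(C)
        diff = X - mu[k]
        _, logdet = np.linalg.slogdet(C)
        quad = np.einsum('ni,ij,nj->n', diff, Cinv, diff)
        log_px[:, k] = -0.5 * (D * np.log(2 * np.pi) + logdet + quad)
        z_mean = diff @ Cinv @ Lam[k]                # E[z | x, k] (posterior mean)
        g_mean[:, k] = z_mean @ A[k].T + kappa[k]    # mapped to global coordinates
    log_joint = log_px + np.log(pi)
    resp = np.exp(log_joint - logsumexp(log_joint, axis=1, keepdims=True))
    return log_joint, resp, g_mean

log_joint, resp, g_mean = mfa_terms(X)

# Standard mixture-of-factor-analyzers log-likelihood.
loglik = logsumexp(log_joint, axis=1).sum()

# Surrogate coordination penalty: responsibility-weighted disagreement of the
# components' global coordinates for the same point.
g_bar = np.einsum('nk,nkd->nd', resp, g_mean)
penalty = np.einsum('nk,nkd->', resp, (g_mean - g_bar[:, None, :]) ** 2)

objective = loglik - penalty
print(f"log-likelihood {loglik:.1f}, coordination penalty {penalty:.1f}")

Minimizing such a penalty alongside the likelihood is one simple way to break the degeneracy mentioned in the abstract: components that share responsibility for a point are driven to place it at nearly the same global coordinate, so the internal coordinates vary smoothly across neighboring local models.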

Cite

Text

Roweis et al. "Global Coordination of Local Linear Models." Neural Information Processing Systems, 2001.

Markdown

[Roweis et al. "Global Coordination of Local Linear Models." Neural Information Processing Systems, 2001.](https://mlanthology.org/neurips/2001/roweis2001neurips-global/)

BibTeX

@inproceedings{roweis2001neurips-global,
  title     = {{Global Coordination of Local Linear Models}},
  author    = {Roweis, Sam T. and Saul, Lawrence K. and Hinton, Geoffrey E.},
  booktitle = {Neural Information Processing Systems},
  year      = {2001},
  pages     = {889--896},
  url       = {https://mlanthology.org/neurips/2001/roweis2001neurips-global/}
}