Semi-Supervised Learning on Riemannian Manifolds

Abstract

We consider the general problem of utilizing both labeled and unlabeled data to improve classification accuracy. Under the assumption that the data lie on a submanifold in a high-dimensional space, we develop an algorithmic framework to classify a partially labeled data set in a principled manner. The central idea of our approach is that classification functions are naturally defined only on the submanifold in question rather than on the total ambient space. Using the Laplace-Beltrami operator, one produces a basis (the Laplacian Eigenmaps) for a Hilbert space of square-integrable functions on the submanifold. To recover such a basis, only unlabeled examples are required. Once such a basis is obtained, training can be performed using the labeled data set. Our algorithm models the manifold using the adjacency graph for the data and approximates the Laplace-Beltrami operator by the graph Laplacian. We provide details of the algorithm, its theoretical justification, and several practical applications for image, speech, and text classification.
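The pipeline the abstract describes can be sketched in a few lines of NumPy: build a k-nearest-neighbor adjacency graph over all points (labeled and unlabeled), form the graph Laplacian L = D − W as a discrete stand-in for the Laplace-Beltrami operator, take the p smoothest eigenvectors as a basis, and fit the labels by least squares on the labeled rows. This is a minimal illustrative sketch, not the paper's exact implementation: the function name, the simple 0/1 edge weights, the choice of k and p, and the toy two-cluster data are all assumptions made here for the example.

```python
import numpy as np

def laplacian_eigenmap_classifier(X, labeled_idx, y_labeled, n_neighbors=5, p=3):
    """Classify a partially labeled data set via graph-Laplacian eigenvectors.

    Steps (mirroring the abstract): build a k-NN adjacency graph over all
    points, form the graph Laplacian L = D - W as a discrete approximation
    of the Laplace-Beltrami operator, take the p smoothest eigenvectors as
    a basis, then fit the labels by least squares on the labeled rows only.
    """
    n = X.shape[0]
    # Pairwise squared distances; symmetric k-NN adjacency with 0/1 weights
    # (an illustrative choice; heat-kernel weights are another option).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d2[i])[1:n_neighbors + 1]] = 1.0  # index 0 is the point itself
    W = np.maximum(W, W.T)            # symmetrize the graph
    L = np.diag(W.sum(axis=1)) - W    # unnormalized graph Laplacian
    # Eigenvectors with the smallest eigenvalues are the smoothest functions
    # on the graph: the discrete Laplacian Eigenmaps basis.
    _, eigvecs = np.linalg.eigh(L)
    E = eigvecs[:, :p]
    # Least-squares fit of the +/-1 labels using only the labeled rows;
    # the fitted function is then evaluated on every point.
    a, *_ = np.linalg.lstsq(E[labeled_idx], y_labeled, rcond=None)
    return np.sign(E @ a)

# Toy demo: two well-separated clusters, only four labeled points in total.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
y_true = np.array([-1.0] * 20 + [1.0] * 20)
labeled_idx = np.array([0, 1, 20, 21])
pred = laplacian_eigenmap_classifier(X, labeled_idx, y_true[labeled_idx])
accuracy = (pred == y_true).mean()
```

Note that the eigenbasis is computed from all points, so the unlabeled examples shape the classifier even though only the labeled rows enter the least-squares fit; on the toy data above, the two clusters form (near-)disconnected graph components whose indicator vectors lie in the smoothest eigenvectors, which is why a handful of labels suffices.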

Cite

Text

Belkin and Niyogi. "Semi-Supervised Learning on Riemannian Manifolds." Machine Learning, 2004. doi:10.1023/B:MACH.0000033120.25363.1E

Markdown

[Belkin and Niyogi. "Semi-Supervised Learning on Riemannian Manifolds." Machine Learning, 2004.](https://mlanthology.org/mlj/2004/belkin2004mlj-semisupervised/) doi:10.1023/B:MACH.0000033120.25363.1E

BibTeX

@article{belkin2004mlj-semisupervised,
  title     = {{Semi-Supervised Learning on Riemannian Manifolds}},
  author    = {Belkin, Mikhail and Niyogi, Partha},
  journal   = {Machine Learning},
  year      = {2004},
  pages     = {209--239},
  doi       = {10.1023/B:MACH.0000033120.25363.1E},
  volume    = {56},
  url       = {https://mlanthology.org/mlj/2004/belkin2004mlj-semisupervised/}
}