Towards a Theoretical Foundation for Laplacian-Based Manifold Methods
Abstract
In recent years, manifold methods have attracted considerable attention in machine learning. However, most algorithms in this class may be termed "manifold-motivated," as they lack explicit theoretical guarantees. In this paper we take a step towards closing the gap between theory and practice for a class of Laplacian-based manifold methods. We show that, under certain conditions, the graph Laplacian of a point cloud converges to the Laplace-Beltrami operator on the underlying manifold. Theorem 1 contains the first result showing convergence of a random graph Laplacian to the manifold Laplacian in the machine learning context.
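The object the abstract refers to can be illustrated concretely. Below is a minimal sketch (not from the paper) of the heat-kernel point-cloud Laplacian applied to points sampled from the unit circle; the kernel bandwidth `t`, the sample size `n`, and the test function are illustrative choices. Up to a constant involving the manifold's volume, the scaled operator should track the Laplace-Beltrami operator, which on the circle sends cos(θ) to cos(θ).

```python
import numpy as np

# Sample n points uniformly from the unit circle (a 1-D manifold in R^2).
rng = np.random.default_rng(0)
n = 2000
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
points = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Heat-kernel weights W_ij = exp(-||x_i - x_j||^2 / (4 t)).
# t is an illustrative bandwidth choice, not a value from the paper.
t = 0.05
sq_dists = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
W = np.exp(-sq_dists / (4.0 * t))

# Graph Laplacian applied to a test function f, scaled as the
# point-cloud operator (1/n) * sum_j W(x_i, x_j) * (f(x_i) - f(x_j)).
f = np.cos(theta)
Lf = (W.sum(axis=1) * f - W @ f) / n

# Kernel normalization 1/(t * (4*pi*t)^{d/2}) with intrinsic dimension
# d = 1; after this scaling, Lf approximates Delta f = cos(theta)
# up to a sampling-density factor (here 1/vol(S^1) = 1/(2*pi)).
d = 1
Lf_scaled = Lf / (t * (4.0 * np.pi * t) ** (d / 2.0))

# The empirical operator output should be strongly correlated with
# the true Laplace-Beltrami output cos(theta).
print(np.corrcoef(Lf_scaled, np.cos(theta))[0, 1])
```

The scaling constants follow the usual heat-kernel normalization; for finite `n` and `t` the output is a noisy multiple of the true Laplacian, so the check above is on correlation rather than exact values.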
Cite
Text
Belkin and Niyogi. "Towards a Theoretical Foundation for Laplacian-Based Manifold Methods." Annual Conference on Computational Learning Theory, 2005. doi:10.1007/11503415_33
Markdown
[Belkin and Niyogi. "Towards a Theoretical Foundation for Laplacian-Based Manifold Methods." Annual Conference on Computational Learning Theory, 2005.](https://mlanthology.org/colt/2005/belkin2005colt-theoretical/) doi:10.1007/11503415_33
BibTeX
@inproceedings{belkin2005colt-theoretical,
title = {{Towards a Theoretical Foundation for Laplacian-Based Manifold Methods}},
author = {Belkin, Mikhail and Niyogi, Partha},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2005},
pages = {486--500},
doi = {10.1007/11503415_33},
url = {https://mlanthology.org/colt/2005/belkin2005colt-theoretical/}
}