Convergence and Rate of Convergence of a Manifold-Based Dimension Reduction Algorithm

Abstract

We study the convergence and the rate of convergence of a local manifold learning algorithm, local tangent space alignment (LTSA) [13]. The main technical tool is a perturbation analysis of the linear invariant subspace that corresponds to the solution of LTSA. We derive a worst-case upper bound on the error of LTSA, which naturally leads to a convergence result. We then derive the rate of convergence of LTSA in a special case.
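
The LTSA analyzed in the paper is local tangent space alignment: for each sample it estimates a tangent-space basis from the k nearest neighbors, then aligns the resulting local coordinates into a global embedding via the smallest eigenvectors of an alignment matrix, whose invariant subspace is what the perturbation analysis above concerns. As a hedged illustration only (not the authors' code), the following NumPy sketch implements this standard form of LTSA; the function name ltsa, the brute-force neighbor search, and the defaults d=2, k=8 are our own illustrative choices.

import numpy as np

def ltsa(X, d=2, k=8):
    """Minimal LTSA sketch: embed n points X (n x D) into d dimensions."""
    n = X.shape[0]
    # k nearest neighbors of each point (row 0 of the sort is the point itself,
    # so each neighborhood contains the point plus its k - 1 nearest neighbors).
    dist2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(dist2, axis=1)[:, :k]

    # Alignment matrix: accumulates each neighborhood's local projection error.
    B = np.zeros((n, n))
    for i in range(n):
        idx = nbrs[i]
        Xi = X[idx] - X[idx].mean(axis=0)            # centered neighborhood
        # Estimated tangent basis: top-d left singular vectors of Xi.
        U, _, _ = np.linalg.svd(Xi, full_matrices=False)
        G = np.hstack([np.full((k, 1), 1.0 / np.sqrt(k)), U[:, :d]])
        B[np.ix_(idx, idx)] += np.eye(k) - G @ G.T

    # Global coordinates: eigenvectors for the 2nd..(d+1)th smallest
    # eigenvalues of B (the smallest belongs to the constant vector).
    _, V = np.linalg.eigh(B)
    return V[:, 1:d + 1]

# Example: points on a helix in R^3; the 1-D embedding recovers the
# curve parameter up to sign and scale.
t = np.linspace(0, 4 * np.pi, 400)
X = np.column_stack([np.cos(t), np.sin(t), 0.5 * t])
Y = ltsa(X, d=1, k=10)

The eigen-decomposition step is where the paper's machinery enters: finite sampling perturbs the alignment matrix B, and the worst-case bound controls how far its computed invariant subspace, and hence the embedding Y, can drift from the ideal one.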

Cite

Text

Smith et al. "Convergence and Rate of Convergence of a Manifold-Based Dimension Reduction Algorithm." Neural Information Processing Systems, 2008.

Markdown

[Smith et al. "Convergence and Rate of Convergence of a Manifold-Based Dimension Reduction Algorithm." Neural Information Processing Systems, 2008.](https://mlanthology.org/neurips/2008/smith2008neurips-convergence/)

BibTeX

@inproceedings{smith2008neurips-convergence,
  title     = {{Convergence and Rate of Convergence of a Manifold-Based Dimension Reduction Algorithm}},
  author    = {Smith, Andrew and Zha, Hongyuan and Wu, Xiao-ming},
  booktitle = {Neural Information Processing Systems},
  year      = {2008},
  pages     = {1529--1536},
  url       = {https://mlanthology.org/neurips/2008/smith2008neurips-convergence/}
}