Dimensionality Reduction and Principal Surfaces via Kernel Map Manifolds
Abstract
We present a manifold learning approach to dimensionality reduction that explicitly models the manifold as a mapping from the low-dimensional to the high-dimensional space. The manifold is represented as a parametrized surface whose parameters are defined on the input samples. The representation also provides a natural mapping from high- to low-dimensional space, and the composition of these two mappings induces a projection operator onto the manifold. The explicit projection operator allows for a clearly defined objective function in terms of projection distance and reconstruction error. Formulating the mappings as kernel regressions permits direct optimization of the objective function, and the extremal points converge to principal surfaces as the number of training samples increases. Principal surfaces have the desirable property that, informally speaking, they pass through the middle of a distribution. We provide a proof of the convergence to principal surfaces and illustrate the effectiveness of the proposed approach on synthetic and real data sets.
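The two mappings and the induced projection operator described above can be sketched with Nadaraya-Watson kernel regression. This is a minimal toy illustration, not the paper's implementation: the data, bandwidths, and variable names (`g`, `h`, `kernel_map`) are assumptions for exposition only.

```python
import numpy as np

def kernel_map(coords, targets, query, bandwidth):
    """Nadaraya-Watson kernel regression: weighted average of targets,
    with Gaussian weights based on distance from query to coords."""
    d2 = np.sum((coords - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w /= w.sum()
    return w @ targets

# Toy data: noisy samples of a 1-D curve embedded in 2-D.
rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 50)[:, None]                      # low-dim coordinates
y = np.hstack([z, np.sin(2 * np.pi * z)])                   # points on the curve
y = y + 0.05 * rng.standard_normal(y.shape)                 # observation noise

# g: low -> high (the manifold map); h: high -> low (the coordinate map).
g = lambda q: kernel_map(z, y, q, bandwidth=0.1)
h = lambda x: kernel_map(y, z, x, bandwidth=0.2)

# The composition g(h(x)) projects a high-dimensional point onto the manifold;
# the squared projection distance is the per-sample term of the objective.
x = y[10]
proj = g(h(x))
err = np.sum((x - proj) ** 2)
print(proj.shape, err)
```

In the paper the low-dimensional coordinates are themselves free parameters optimized to minimize the total projection distance; here they are simply fixed to illustrate the structure of the projection operator.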
Cite
Text
Gerber et al. "Dimensionality Reduction and Principal Surfaces via Kernel Map Manifolds." IEEE/CVF International Conference on Computer Vision, 2009. doi:10.1109/ICCV.2009.5459193
Markdown
[Gerber et al. "Dimensionality Reduction and Principal Surfaces via Kernel Map Manifolds." IEEE/CVF International Conference on Computer Vision, 2009.](https://mlanthology.org/iccv/2009/gerber2009iccv-dimensionality/) doi:10.1109/ICCV.2009.5459193
BibTeX
@inproceedings{gerber2009iccv-dimensionality,
title = {{Dimensionality Reduction and Principal Surfaces via Kernel Map Manifolds}},
author = {Gerber, Samuel and Tasdizen, Tolga and Whitaker, Ross T.},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {2009},
pages = {529--536},
doi = {10.1109/ICCV.2009.5459193},
url = {https://mlanthology.org/iccv/2009/gerber2009iccv-dimensionality/}
}