Connecting the Out-of-Sample and Pre-Image Problems in Kernel Methods
Abstract
Kernel methods have been widely studied in the field of pattern recognition. These methods implicitly map the data, via the so-called "kernel trick," into a space which is more appropriate for analysis. Many manifold learning and dimensionality reduction techniques are simply kernel methods for which the mapping is explicitly computed. In such cases, two problems related to the mapping arise: the out-of-sample extension and the pre-image computation. In this paper we propose a new pre-image method based on the Nyström formulation for the out-of-sample extension, showing the connections between both problems. We also address the importance of normalization in the feature space, which has been ignored by standard pre-image algorithms. As an example, we apply these ideas to the Gaussian kernel, and relate our approach to other popular pre-image methods. Finally, we show the application of these techniques in the study of dynamic shapes.
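For context, the Nyström out-of-sample extension mentioned in the abstract embeds a new point into an eigenvector-based kernel embedding without recomputing the eigendecomposition. The sketch below is not the paper's method, just a minimal illustration of the standard Nyström formula for a Gaussian kernel; all function names are our own, and the kernel matrix is left uncentered for simplicity.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_extension(X_train, X_new, n_components=2, sigma=1.0):
    """Embed training data via the top kernel eigenvectors, then extend
    the embedding to new points with the Nystrom formula."""
    K = gaussian_kernel(X_train, X_train, sigma)
    lam, V = np.linalg.eigh(K)           # ascending eigenvalues
    lam = lam[::-1][:n_components]       # keep the largest ones
    V = V[:, ::-1][:, :n_components]
    # Training embedding: eigenvector rows scaled by sqrt(eigenvalue).
    Y_train = V * np.sqrt(lam)
    # Out-of-sample point x: y_k(x) = sum_i v_ik k(x, x_i) / sqrt(lam_k)
    K_new = gaussian_kernel(X_new, X_train, sigma)
    Y_new = K_new @ V / np.sqrt(lam)
    return Y_train, Y_new
```

A sanity check of the formula: applying the extension to the training points themselves reproduces their original embedding, since K V = V diag(lam). The pre-image problem studied in the paper is the inverse direction, mapping a point of the embedding space back to input space.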
Cite
Text
Arias et al. "Connecting the Out-of-Sample and Pre-Image Problems in Kernel Methods." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2007. doi:10.1109/CVPR.2007.383038
Markdown
[Arias et al. "Connecting the Out-of-Sample and Pre-Image Problems in Kernel Methods." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2007.](https://mlanthology.org/cvpr/2007/arias2007cvpr-connecting/) doi:10.1109/CVPR.2007.383038
BibTeX
@inproceedings{arias2007cvpr-connecting,
title = {{Connecting the Out-of-Sample and Pre-Image Problems in Kernel Methods}},
author = {Arias, Pablo and Randall, Gregory and Sapiro, Guillermo},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2007},
doi = {10.1109/CVPR.2007.383038},
url = {https://mlanthology.org/cvpr/2007/arias2007cvpr-connecting/}
}