Putting Local Features on a Manifold
Abstract
Local features have proven very useful for recognition, and manifold learning has proven to be a very powerful tool in data analysis. However, applications of manifold learning to images are mainly based on holistic vectorized representations of images. The challenging question that we address in this paper is how to learn image manifolds from a collection of local features in a smooth way that captures both the feature similarity and the variability in spatial arrangement between images. We introduce a novel framework for learning a manifold representation from collections of local features in images. We first show how we can learn a feature embedding representation that preserves both the local appearance similarity and the spatial structure of the features. We also show how we can embed features from a new image by introducing a solution to the out-of-sample problem that is suitable for this context. By solving these two problems and defining a proper distance measure in the feature embedding space, we can reach an image manifold embedding space.
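The feature embedding the abstract describes can be illustrated with a Laplacian-eigenmaps-style sketch: blend an appearance affinity (a Gaussian kernel on descriptor distances) with a spatial affinity, then solve the graph Laplacian eigenproblem. This is only an illustrative approximation under assumed inputs; the function name `feature_embedding` and the parameters `alpha` and `sigma` are hypothetical, and the paper's actual objective differs in its details.

```python
import numpy as np
from scipy.linalg import eigh

def feature_embedding(desc, spatial_aff, alpha=0.5, sigma=1.0, dim=2):
    """Embed local features so that both appearance similarity and
    spatial structure are preserved (Laplacian-eigenmaps-style sketch).

    desc        : (n, f) array of local feature descriptors
    spatial_aff : (n, n) spatial-affinity matrix (assumed given)
    alpha       : hypothetical blending weight between the two affinities
    """
    # Appearance affinity: Gaussian kernel on squared descriptor distances.
    d2 = ((desc[:, None, :] - desc[None, :, :]) ** 2).sum(-1)
    w_app = np.exp(-d2 / (2.0 * sigma ** 2))
    # Blend appearance and spatial affinities into one graph weight matrix.
    w = alpha * w_app + (1.0 - alpha) * spatial_aff
    d = np.diag(w.sum(axis=1))
    lap = d - w
    # Generalized eigenproblem L v = lambda D v; drop the constant eigenvector.
    _, vecs = eigh(lap, d)
    return vecs[:, 1:dim + 1]
```

Each row of the returned array is a feature's coordinate in the joint embedding space; with such an embedding in hand, distances between sets of embedded features can serve as the image-level distance the abstract refers to.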
Cite
Text
Torki and Elgammal. "Putting Local Features on a Manifold." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010. doi:10.1109/CVPR.2010.5539843
Markdown
[Torki and Elgammal. "Putting Local Features on a Manifold." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010.](https://mlanthology.org/cvpr/2010/torki2010cvpr-putting/) doi:10.1109/CVPR.2010.5539843
BibTeX
@inproceedings{torki2010cvpr-putting,
title = {{Putting Local Features on a Manifold}},
author = {Torki, Marwan and Elgammal, Ahmed M.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2010},
  pages = {1743--1750},
doi = {10.1109/CVPR.2010.5539843},
url = {https://mlanthology.org/cvpr/2010/torki2010cvpr-putting/}
}