One-Shot Multi-Set Non-Rigid Feature-Spatial Matching
Abstract
We introduce a novel framework for non-rigid feature matching among multiple sets that takes into consideration both the feature descriptors and the features' spatial arrangement. We learn an embedded representation that combines descriptor similarity and spatial arrangement in a unified Euclidean embedding space. This unified embedding is reached by minimizing an objective function with two sources of weights: the feature spatial arrangement within each set and the feature descriptor similarity scores across the different sets. The solution can be obtained directly by solving a single eigenvalue problem that is linear in the number of features. Therefore, the framework is very efficient and can scale up to handle a large number of features. Experimental evaluation on different sets shows outstanding results compared to the state of the art; up to 100% accuracy is achieved on the well-known `Hotel' sequence.
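The core idea — building a joint affinity matrix from within-set spatial weights and between-set descriptor weights, then solving one eigenvalue problem to get a unified embedding — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian kernels, bandwidth `sigma_s`, and the Laplacian-eigenmaps-style generalized eigenproblem are all assumptions made for the sketch.

```python
import numpy as np
from scipy.linalg import eigh

def joint_embedding(coords_sets, desc_sets, dim=2, sigma_s=1.0):
    """Embed features from multiple sets into one Euclidean space.

    Within-set affinities come from spatial proximity (Gaussian kernel,
    a hypothetical choice); between-set affinities come from descriptor
    similarity (Gaussian on descriptor distance, also an assumption).
    """
    ns = [len(c) for c in coords_sets]
    offsets = np.cumsum([0] + ns)
    N = offsets[-1]
    W = np.zeros((N, N))
    for i, (ci, di) in enumerate(zip(coords_sets, desc_sets)):
        a, b = offsets[i], offsets[i + 1]
        # within-set block: spatial-arrangement weights
        d2 = ((ci[:, None, :] - ci[None, :, :]) ** 2).sum(-1)
        W[a:b, a:b] = np.exp(-d2 / (2 * sigma_s ** 2))
        for j in range(i + 1, len(coords_sets)):
            c, d = offsets[j], offsets[j + 1]
            # between-set block: descriptor-similarity weights
            dd2 = ((di[:, None, :] - desc_sets[j][None, :, :]) ** 2).sum(-1)
            S = np.exp(-dd2)
            W[a:b, c:d] = S
            W[c:d, a:b] = S.T
    D = np.diag(W.sum(axis=1))
    L = D - W
    # one generalized eigenvalue problem L y = lambda D y;
    # drop the trivial constant eigenvector, keep the next `dim`
    _, vecs = eigh(L, D)
    return [vecs[s:e, 1:dim + 1] for s, e in zip(offsets[:-1], offsets[1:])]
```

Once all features live in the shared embedding space, matching across sets reduces to nearest-neighbor search on the embedded coordinates.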
Cite
Text
Torki and Elgammal. "One-Shot Multi-Set Non-Rigid Feature-Spatial Matching." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010. doi:10.1109/CVPR.2010.5540059
Markdown
[Torki and Elgammal. "One-Shot Multi-Set Non-Rigid Feature-Spatial Matching." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010.](https://mlanthology.org/cvpr/2010/torki2010cvpr-one/) doi:10.1109/CVPR.2010.5540059
BibTeX
@inproceedings{torki2010cvpr-one,
title = {{One-Shot Multi-Set Non-Rigid Feature-Spatial Matching}},
author = {Torki, Marwan and Elgammal, Ahmed M.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2010},
pages = {3058-3065},
doi = {10.1109/CVPR.2010.5540059},
url = {https://mlanthology.org/cvpr/2010/torki2010cvpr-one/}
}