D-Nets: Beyond Patch-Based Image Descriptors
Abstract
Despite much research on patch-based descriptors, SIFT remains the gold standard for finding correspondences across images, and recent descriptors focus primarily on improving speed rather than accuracy. In this paper we propose Descriptor-Nets (D-Nets), a computationally efficient method that significantly improves the accuracy of image matching by going beyond patch-based approaches. D-Nets constructs a network in which nodes correspond to traditional sparsely or densely sampled keypoints, and where image content is sampled from selected edges in this net. Not only is our proposed representation invariant to cropping, translation, scale, reflection and rotation, but it is also significantly more robust to severe perspective and non-linear distortions. We present several variants of our algorithm, including one that tunes itself to the image complexity and an efficient parallelized variant that employs a fixed grid. Comprehensive direct comparisons against SIFT and ORB on standard datasets demonstrate that D-Nets dominates existing approaches in terms of precision and recall while retaining computational efficiency.
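To make the core idea concrete, here is a minimal sketch of a D-Nets-style edge descriptor: instead of describing a patch around a single keypoint, sample pixel intensities along the segment connecting two keypoints, pool them into sections, normalize, and quantize each section into a few bits to form a discrete token. The function name `d_token` and the specific parameter values are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def d_token(img, p, q, sections=8, samples_per_section=4, bits=2):
    """Illustrative D-Nets-style descriptor for the edge between
    keypoints p and q (given as (row, col) in a grayscale image).
    Parameter names and values are assumptions for this sketch."""
    n = sections * samples_per_section
    # Sample the middle portion of the segment; trimming the ends
    # reduces sensitivity to keypoint localization noise.
    ts = np.linspace(0.1, 0.9, n)
    ys = p[0] + ts * (q[0] - p[0])
    xs = p[1] + ts * (q[1] - p[1])
    vals = img[ys.round().astype(int), xs.round().astype(int)].astype(float)
    # Average-pool the samples into `sections` equal chunks.
    pooled = vals.reshape(sections, samples_per_section).mean(axis=1)
    # Normalize to [0, 1) for invariance to brightness and contrast,
    # then quantize each chunk to `bits` bits.
    lo, hi = pooled.min(), pooled.max()
    norm = (pooled - lo) / (hi - lo + 1e-9)
    levels = np.clip((norm * (1 << bits)).astype(int), 0, (1 << bits) - 1)
    # Concatenate the per-section codes into one integer "d-token".
    token = 0
    for lv in levels:
        token = (token << bits) | int(lv)
    return token
```

With these defaults each edge yields a 16-bit token, so tokens from many edges can be matched via a simple hash table rather than nearest-neighbor search in a high-dimensional descriptor space.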
Cite
Text
von Hundelshausen and Sukthankar. "D-Nets: Beyond Patch-Based Image Descriptors." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012. doi:10.1109/CVPR.2012.6248022
Markdown
[von Hundelshausen and Sukthankar. "D-Nets: Beyond Patch-Based Image Descriptors." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012.](https://mlanthology.org/cvpr/2012/vonhundelshausen2012cvpr-d/) doi:10.1109/CVPR.2012.6248022
BibTeX
@inproceedings{vonhundelshausen2012cvpr-d,
title = {{D-Nets: Beyond Patch-Based Image Descriptors}},
author = {von Hundelshausen, Felix and Sukthankar, Rahul},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2012},
pages = {2941-2948},
doi = {10.1109/CVPR.2012.6248022},
url = {https://mlanthology.org/cvpr/2012/vonhundelshausen2012cvpr-d/}
}