Picking the Best DAISY
Abstract
Local image descriptors that are highly discriminative, computationally efficient, and have a low storage footprint have long been a goal of computer vision research. In this paper, we focus on learning such descriptors, which make use of the DAISY configuration and are simple to compute both sparsely and densely. We develop a new training set of match/non-match image patches which improves on previous work. We test a wide variety of gradient- and steerable-filter-based configurations and optimize over all parameters to obtain low matching errors for the descriptors. We further explore robust normalization, dimension reduction, and dynamic range reduction to increase the discriminative power of the learned descriptors while reducing their storage requirement. Together these enable us to obtain highly efficient local descriptors: e.g., 13.2% error at 13 bytes of storage per descriptor, compared with 26.1% error at 128 bytes for SIFT.
Cite
Text
Winder et al. "Picking the Best DAISY." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009. doi:10.1109/CVPR.2009.5206839

Markdown

[Winder et al. "Picking the Best DAISY." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009.](https://mlanthology.org/cvpr/2009/winder2009cvpr-picking/) doi:10.1109/CVPR.2009.5206839

BibTeX
@inproceedings{winder2009cvpr-picking,
title = {{Picking the Best DAISY}},
author = {Winder, Simon A. J. and Hua, Gang and Brown, Matthew A.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2009},
pages = {178--185},
doi = {10.1109/CVPR.2009.5206839},
url = {https://mlanthology.org/cvpr/2009/winder2009cvpr-picking/}
}