Feature Tracking for Wide-Baseline Image Retrieval
Abstract
We address the problem of large-scale image retrieval in a wide-baseline setting, where for any query image all the matching database images will come from very different viewpoints. In such settings, traditional bag-of-visual-words approaches are not equipped to handle the significant feature descriptor transformations that occur under large camera motions. In this paper we present a novel approach that includes an offline step of feature matching, which allows us to observe how local descriptors transform under large camera motions. These observations are encoded in a graph in the quantized feature space. This graph can be used directly within a soft-assignment feature quantization scheme for image retrieval.
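The core idea of graph-guided soft assignment can be illustrated with a toy sketch. The vocabulary, graph, and weighting scheme below are illustrative assumptions, not the paper's actual implementation: a descriptor is quantized to its nearest visual word, and weight is then spread to words linked in a graph (in the paper, edges encode observed descriptor transformations under large camera motion).

```python
import numpy as np

# Toy visual vocabulary of 2-D "descriptors" (illustrative values only).
vocabulary = np.array([
    [0.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
])

# Hypothetical graph over the quantized feature space: an edge links two
# words whose descriptors were observed to transform into one another.
graph = {0: [1], 1: [0, 3], 2: [3], 3: [1, 2]}

def soft_assign(descriptor, vocabulary, graph, neighbor_weight=0.5):
    """Quantize a descriptor to its nearest word, then spread weight to
    the word's graph neighbors; returns a normalized assignment vector."""
    dists = np.linalg.norm(vocabulary - descriptor, axis=1)
    word = int(np.argmin(dists))
    weights = np.zeros(len(vocabulary))
    weights[word] = 1.0            # full weight on the nearest word
    for nb in graph[word]:
        weights[nb] = neighbor_weight  # partial weight on linked words
    return weights / weights.sum()

# A descriptor near word 1 also activates its graph neighbors 0 and 3.
w = soft_assign(np.array([0.9, 0.1]), vocabulary, graph)
print(w)  # → [0.25 0.5  0.   0.25]
```

Compared with distance-based soft assignment, spreading weight along graph edges lets a query descriptor match database descriptors that have moved to a different region of feature space under viewpoint change.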
Cite
Text
Makadia. "Feature Tracking for Wide-Baseline Image Retrieval." European Conference on Computer Vision, 2010. doi:10.1007/978-3-642-15555-0_23
Markdown
[Makadia. "Feature Tracking for Wide-Baseline Image Retrieval." European Conference on Computer Vision, 2010.](https://mlanthology.org/eccv/2010/makadia2010eccv-feature/) doi:10.1007/978-3-642-15555-0_23
BibTeX
@inproceedings{makadia2010eccv-feature,
title = {{Feature Tracking for Wide-Baseline Image Retrieval}},
author = {Makadia, Ameesh},
booktitle = {European Conference on Computer Vision},
year = {2010},
pages = {310-323},
doi = {10.1007/978-3-642-15555-0_23},
url = {https://mlanthology.org/eccv/2010/makadia2010eccv-feature/}
}