Ultra-Wide Baseline Facade Matching for Geo-Localization
Abstract
Matching street-level images to a database of airborne images is hard because of extreme viewpoint and illumination differences. Color/gradient distributions and local descriptors fail to match, forcing us to rely on the structure of self-similarity of patterns on facades. We propose to capture this structure with a novel "scale-selective self-similarity" (S^4) descriptor, computed at each point on the facade at its inherent scale. To achieve this, we introduce a new method for scale selection which also enables the extraction and segmentation of facades. Matching is done with a Bayesian classification of the street-view query S^4 descriptors given all labeled descriptors in the bird's-eye-view database. We show experimental results on retrieval accuracy on a challenging set of publicly available imagery and compare with standard SIFT-based techniques.
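As a rough illustration of the self-similarity idea underlying the S^4 descriptor, the sketch below computes a plain local self-similarity descriptor (in the style of Shechtman and Irani): the patch at a point is correlated with every patch in a surrounding region, and the correlation surface is max-pooled into log-polar bins. This is an assumption-laden simplification; the paper's S^4 descriptor additionally selects a per-point scale, which is omitted here, and all parameter values are illustrative.

```python
import numpy as np

def self_similarity_descriptor(img, y, x, patch=5, region=21, n_rad=3, n_ang=8):
    """Local self-similarity at (y, x): correlate the central patch with every
    patch in a surrounding region, then max-pool the correlation surface into
    log-polar bins. Illustrative only -- the paper's S^4 descriptor also
    performs scale selection per point, which is not modeled here."""
    hp, hr = patch // 2, region // 2
    center = img[y - hp:y + hp + 1, x - hp:x + hp + 1].astype(float)

    # Sum-of-squared-differences correlation surface over the region,
    # mapped to (0, 1] via an exponential.
    corr = np.zeros((region, region))
    for dy in range(-hr, hr + 1):
        for dx in range(-hr, hr + 1):
            cand = img[y + dy - hp:y + dy + hp + 1,
                       x + dx - hp:x + dx + hp + 1].astype(float)
            ssd = np.sum((center - cand) ** 2)
            corr[dy + hr, dx + hr] = np.exp(-ssd / (patch * patch * 255.0))

    # Max-pool the surface into n_rad x n_ang log-polar bins.
    desc = np.zeros((n_rad, n_ang))
    for dy in range(-hr, hr + 1):
        for dx in range(-hr, hr + 1):
            r = np.hypot(dy, dx)
            if r == 0 or r > hr:          # skip the trivial self-match
                continue
            rb = min(int(np.log1p(r) / np.log1p(hr) * n_rad), n_rad - 1)
            ab = int((np.arctan2(dy, dx) + np.pi) / (2 * np.pi) * n_ang) % n_ang
            desc[rb, ab] = max(desc[rb, ab], corr[dy + hr, dx + hr])
    return desc.ravel()
```

Because the descriptor encodes only the spatial layout of self-similar patches, it is far less sensitive to appearance changes than raw color or gradient statistics, which is the property the abstract leans on for ultra-wide baseline matching.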
Cite
Text
Bansal et al. "Ultra-Wide Baseline Facade Matching for Geo-Localization." European Conference on Computer Vision, 2012. doi:10.1007/978-3-642-33863-2_18
Markdown
[Bansal et al. "Ultra-Wide Baseline Facade Matching for Geo-Localization." European Conference on Computer Vision, 2012.](https://mlanthology.org/eccv/2012/bansal2012eccv-ultra/) doi:10.1007/978-3-642-33863-2_18
BibTeX
@inproceedings{bansal2012eccv-ultra,
title = {{Ultra-Wide Baseline Facade Matching for Geo-Localization}},
author = {Bansal, Mayank and Daniilidis, Kostas and Sawhney, Harpreet S.},
booktitle = {European Conference on Computer Vision},
year = {2012},
pages = {175-186},
doi = {10.1007/978-3-642-33863-2_18},
url = {https://mlanthology.org/eccv/2012/bansal2012eccv-ultra/}
}