Predicting Image Matching Using Affine Distortion Models
Abstract
We propose a novel method for predicting whether an image taken from a given location will match an existing set of images. This problem appears prominently in image-based localization and augmented reality applications, where new images are matched to an existing set to determine location or to insert virtual information into a scene. Our process generates a spatial coverage map showing the confidence that images taken at specific locations will match an existing image set. A new way to measure distortion between images using affine models is introduced. The distortion measure is combined with existing machine learning and structure-from-motion techniques to create a matching confidence predictor. The predictor is used to generate the spatial coverage map and to determine which images in the original set are redundant and can be removed. Results are presented showing that the predictor is more accurate than previously published approaches.
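The abstract describes the affine distortion measure only at a high level, so the sketch below is a rough illustration of the kind of computation involved rather than the paper's actual formulation: it fits a 2x3 affine model to hypothetical keypoint correspondences by least squares and scores anisotropic distortion via the singular values of the model's linear part. The function names, the NumPy-based fitting, and the singular-value-ratio score are assumptions made for illustration only.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares fit of a 2x3 affine model mapping src_pts -> dst_pts.

    src_pts, dst_pts: (N, 2) arrays of corresponding keypoint locations.
    """
    n = src_pts.shape[0]
    # Design matrix for x' = a*x + b*y + tx and y' = c*x + d*y + ty.
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src_pts
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src_pts
    A[1::2, 5] = 1.0
    b = dst_pts.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

def affine_distortion(affine):
    """Illustrative distortion score: anisotropy of the affine's linear part.

    The ratio of singular values is 1 for similarity transforms; larger
    ratios indicate stronger shear/foreshortening between the two views.
    """
    s = np.linalg.svd(affine[:, :2], compute_uv=False)
    return s[0] / max(s[1], 1e-9)

# Example: correspondences related by a shear yield a score above 1.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(50, 2))
shear = np.array([[1.0, 0.4], [0.0, 1.0]])
dst = src @ shear.T + np.array([5.0, -3.0])
print(affine_distortion(fit_affine(src, dst)))
```

In a predictor along the lines the abstract sketches, a score like this (computed over matched feature neighborhoods) would be one input feature to a learned classifier of matching confidence; the specific features and learner used in the paper are not given in this excerpt.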
Cite
Text
Fleck and Duric. "Predicting Image Matching Using Affine Distortion Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011. doi:10.1109/CVPR.2011.5995389
Markdown
[Fleck and Duric. "Predicting Image Matching Using Affine Distortion Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011.](https://mlanthology.org/cvpr/2011/fleck2011cvpr-predicting/) doi:10.1109/CVPR.2011.5995389
BibTeX
@inproceedings{fleck2011cvpr-predicting,
title = {{Predicting Image Matching Using Affine Distortion Models}},
author = {Fleck, Daniel and Duric, Zoran},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2011},
pages = {105-112},
doi = {10.1109/CVPR.2011.5995389},
url = {https://mlanthology.org/cvpr/2011/fleck2011cvpr-predicting/}
}