Image Correspondences Matching Using Multiple Features Fusion
Abstract
In this paper, we present a novel framework that significantly increases the accuracy of correspondence matching between two images under various image transformations. We first define a retina-inspired patch structure that mimics the topology of the human retina, and use highly discriminative convolutional neural network (CNN) features to represent these patches. We then employ conventional salient point methods to locate salient points, and finally fuse the local descriptor of each salient point with the CNN feature of the local patch to which the salient point belongs. The evaluation results show the effectiveness of the proposed multiple features fusion (MFF) framework, which improves on the accuracy of leading approaches on two popular benchmark datasets.
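The fusion step described in the abstract can be illustrated with a small sketch. The weighting scheme (`alpha`), the concatenation-based fusion, and the ratio-test matcher below are illustrative assumptions, not the paper's exact method: each salient point's local descriptor is combined with the CNN feature of its enclosing patch, and matches are found by nearest-neighbour search.

```python
import math

def l2_normalize(v):
    # Scale a vector to unit L2 norm (guard against the zero vector).
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def fuse(local_desc, cnn_feat, alpha=0.5):
    # Hypothetical fusion: concatenate the L2-normalized local descriptor
    # and patch-level CNN feature, weighted by alpha vs. (1 - alpha).
    a = [alpha * x for x in l2_normalize(local_desc)]
    b = [(1.0 - alpha) * x for x in l2_normalize(cnn_feat)]
    return a + b

def match(descs1, descs2, ratio=0.8):
    # Nearest-neighbour matching with a Lowe-style ratio test on
    # squared Euclidean distances between fused descriptors.
    matches = []
    for i, d in enumerate(descs1):
        dists = sorted(
            (sum((x - y) ** 2 for x, y in zip(d, e)), j)
            for j, e in enumerate(descs2)
        )
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

In practice the local descriptor would come from a detector such as SIFT and the patch feature from a CNN; here both are plain vectors so the fusion and matching logic stand alone.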
Cite
Text
Wu and Lew. "Image Correspondences Matching Using Multiple Features Fusion." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-49409-8_61
Markdown
[Wu and Lew. "Image Correspondences Matching Using Multiple Features Fusion." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/wu2016eccv-image/) doi:10.1007/978-3-319-49409-8_61
BibTeX
@inproceedings{wu2016eccv-image,
title = {{Image Correspondences Matching Using Multiple Features Fusion}},
author = {Wu, Song and Lew, Michael S.},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {737-746},
doi = {10.1007/978-3-319-49409-8_61},
url = {https://mlanthology.org/eccv/2016/wu2016eccv-image/}
}