ECO-TR: Efficient Correspondences Finding via Coarse-to-Fine Refinement
Abstract
Modeling sparse and dense image matching within a unified functional model has recently attracted increasing research interest. However, existing efforts mainly focus on improving matching accuracy while ignoring efficiency, which is crucial for real-world applications. In this paper, we propose an efficient structure named Efficient Correspondence Transformer (ECO-TR), which finds correspondences in a coarse-to-fine manner and significantly improves the efficiency of the functional model. To achieve this, multiple transformer blocks are connected stage-wise to gradually refine the predicted coordinates on top of a shared multi-scale feature extraction network. Given a pair of images and arbitrary query coordinates, all correspondences are predicted within a single feed-forward pass. We further propose an adaptive query-clustering strategy and an uncertainty-based outlier detection module that cooperate with the proposed framework for faster and better predictions. Experiments on various sparse and dense matching tasks demonstrate the superiority of our method in both efficiency and effectiveness against existing state-of-the-art methods. Project page: https://dltan7.github.io/ecotr/.
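The coarse-to-fine idea in the abstract, where stage-wise blocks each add a residual correction to the current coordinate estimate, can be sketched in a few lines. This is a minimal illustration only: the `stages` callables stand in for the paper's transformer blocks, and all names, shapes, and the toy update rule are assumptions, not the authors' implementation.

```python
import numpy as np

def refine_correspondences(queries, stages):
    """Coarse-to-fine refinement: each stage predicts a residual
    offset that is added to the current coordinate estimate.
    All stages run in one forward pass over the same queries."""
    coords = np.asarray(queries, dtype=float)
    for stage in stages:
        coords = coords + stage(coords)  # residual update per stage
    return coords

# Toy stages: each halves the remaining error toward a fixed target,
# mimicking progressively finer corrections (purely illustrative).
target = np.array([[10.0, 20.0]])
stages = [lambda c: 0.5 * (target - c) for _ in range(3)]
result = refine_correspondences(np.array([[0.0, 0.0]]), stages)
```

Here three stages shrink the initial error by half each time, ending near the target; in the real model each stage would instead attend over multi-scale image features at the current estimate.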
Cite
Text
Tan et al. "ECO-TR: Efficient Correspondences Finding via Coarse-to-Fine Refinement." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20080-9_19
Markdown
[Tan et al. "ECO-TR: Efficient Correspondences Finding via Coarse-to-Fine Refinement." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/tan2022eccv-ecotr/) doi:10.1007/978-3-031-20080-9_19
BibTeX
@inproceedings{tan2022eccv-ecotr,
title = {{ECO-TR: Efficient Correspondences Finding via Coarse-to-Fine Refinement}},
author = {Tan, Dongli and Liu, Jiang-Jiang and Chen, Xingyu and Chen, Chao and Zhang, Ruixin and Shen, Yunhang and Ding, Shouhong and Ji, Rongrong},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-20080-9_19},
url = {https://mlanthology.org/eccv/2022/tan2022eccv-ecotr/}
}