S2DNet: Learning Image Features for Accurate Sparse-to-Dense Matching

Abstract

Establishing robust and accurate correspondences is a fundamental building block of many computer vision algorithms. While recent learning-based feature matching methods have shown promising results in providing robust correspondences under challenging conditions, they are often limited in terms of precision. In this paper, we introduce S2DNet, a novel feature matching pipeline designed and trained to efficiently establish both robust and accurate correspondences. By leveraging a sparse-to-dense matching paradigm, we cast the correspondence learning problem as a supervised classification task and learn to output highly peaked correspondence maps. We show that S2DNet achieves state-of-the-art results on the HPatches benchmark, as well as on several long-term visual localization datasets.
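The core sparse-to-dense idea can be illustrated in a few lines: a descriptor for a sparse keypoint in image A is correlated against a dense feature map of image B, and a softmax turns the resulting score map into a correspondence map that peaks at the matching location. The sketch below is a minimal NumPy illustration of that correlation step, not the paper's network or training procedure; all names and the synthetic data are illustrative.

```python
import numpy as np

def correspondence_map(desc_a, feat_b):
    """Correlate one sparse descriptor from image A (shape (C,)) with a
    dense feature map of image B (shape (H, W, C)); a softmax normalizes
    the scores into a distribution over pixel locations."""
    scores = np.tensordot(feat_b, desc_a, axes=([2], [0]))  # (H, W) score map
    scores = scores - scores.max()  # subtract max for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()
    return probs

rng = np.random.default_rng(0)
H, W, C = 32, 32, 64
feat_b = rng.normal(size=(H, W, C))
# Plant a strong match at (10, 20) by querying with that cell's own feature.
desc_a = feat_b[10, 20]
probs = correspondence_map(desc_a, feat_b)
match = np.unravel_index(np.argmax(probs), probs.shape)
print(match)  # the peak lands on the planted location (10, 20)
```

At training time, the paper's supervised classification view corresponds to applying a cross-entropy loss on such a map against the known ground-truth pixel, which encourages the highly peaked correspondence maps described above.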

Cite

Text

Germain et al. "S2DNet: Learning Image Features for Accurate Sparse-to-Dense Matching." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58580-8_37

Markdown

[Germain et al. "S2DNet: Learning Image Features for Accurate Sparse-to-Dense Matching." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/germain2020eccv-s2dnet/) doi:10.1007/978-3-030-58580-8_37

BibTeX

@inproceedings{germain2020eccv-s2dnet,
  title     = {{S2DNet: Learning Image Features for Accurate Sparse-to-Dense Matching}},
  author    = {Germain, Hugo and Bourmaud, Guillaume and Lepetit, Vincent},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58580-8_37},
  url       = {https://mlanthology.org/eccv/2020/germain2020eccv-s2dnet/}
}