Extracting Deformation-Aware Local Features by Learning to Deform

Abstract

Despite the advances in local feature extraction achieved by handcrafted and learning-based descriptors, these descriptors are still limited by their lack of invariance to non-rigid transformations. In this paper, we present a new approach for computing features from still images that are robust to non-rigid deformations, circumventing the problem of matching deformable surfaces and objects. Our deformation-aware local descriptor, named DEAL, leverages polar sampling and spatial transformer warping to provide invariance to rotation, scale, and image deformations. We train the architecture end-to-end, applying isometric non-rigid deformations to objects in a simulated environment as guidance to produce highly discriminative local features. Experiments show that our method outperforms state-of-the-art handcrafted, learning-based image, and RGB-D descriptors on several datasets containing both real and realistic synthetic deformable objects in still images. The source code and trained model of the descriptor are publicly available at https://www.verlab.dcc.ufmg.br/descriptors/neurips2021.
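
The polar sampling mentioned in the abstract can be illustrated with a short sketch. The snippet below is not the authors' released implementation; it is a minimal PyTorch sketch of how a patch around a keypoint can be resampled on a polar grid with a differentiable warp (the same grid_sample mechanism a spatial transformer uses). The function names polar_grid and sample_polar_patch, and parameters such as radius_bins, are hypothetical choices for illustration.

import math
import torch
import torch.nn.functional as F

def polar_grid(radius_bins: int, angle_bins: int) -> torch.Tensor:
    """Build a polar sampling grid in normalized [-1, 1] coordinates.

    Rows index radius and columns index angle, so an in-plane rotation of
    the underlying patch becomes a cyclic shift along the angular axis of
    the resampled values -- the usual motivation for polar sampling.
    """
    r = torch.linspace(0.1, 1.0, radius_bins)                      # radial samples
    theta = torch.linspace(0.0, 2 * math.pi, angle_bins + 1)[:-1]  # angular samples
    x = r[:, None] * torch.cos(theta)[None, :]                     # (R, A)
    y = r[:, None] * torch.sin(theta)[None, :]                     # (R, A)
    return torch.stack([x, y], dim=-1)                             # (R, A, 2)

def sample_polar_patch(image: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
    """Differentiably resample `image` on `grid` with bilinear interpolation.

    `image` is (1, C, H, W); `grid` is (R, A, 2) in normalized coordinates,
    as produced by polar_grid. Returns a (1, C, R, A) polar patch.
    """
    return F.grid_sample(image, grid.unsqueeze(0), align_corners=False)

# Usage: extract a 32x32 polar patch from a dummy single-channel image.
image = torch.rand(1, 1, 128, 128)
grid = polar_grid(radius_bins=32, angle_bins=32)
patch = sample_polar_patch(image, grid)  # shape: (1, 1, 32, 32)

Because the resampling is differentiable, gradients flow through the warp, which is what allows a spatial transformer-style module to be trained end-to-end on top of this representation, as the abstract describes.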

Cite

Text

Potje et al. "Extracting Deformation-Aware Local Features by Learning to Deform." Neural Information Processing Systems, 2021.

Markdown

[Potje et al. "Extracting Deformation-Aware Local Features by Learning to Deform." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/potje2021neurips-extracting/)

BibTeX

@inproceedings{potje2021neurips-extracting,
  title     = {{Extracting Deformation-Aware Local Features by Learning to Deform}},
  author    = {Potje, Guilherme and Martins, Renato and Chamone, Felipe and Nascimento, Erickson},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/potje2021neurips-extracting/}
}