Bootstrap Your Own Correspondences

Abstract

Geometric feature extraction is a crucial component of point cloud registration pipelines. Recent work has demonstrated how supervised learning can be leveraged to learn better and more compact 3D features. However, the reliance of those approaches on ground-truth annotations limits their scalability. We propose BYOC: a self-supervised approach that learns visual and geometric features from RGB-D video without relying on ground-truth pose or correspondence. Our key observation is that randomly-initialized CNNs readily provide us with good correspondences, allowing us to bootstrap the learning of both visual and geometric features. Our approach combines classic ideas from point cloud registration with more recent representation learning approaches. We evaluate our approach on indoor scene datasets and find that our method outperforms traditional and learned descriptors, while being competitive with current state-of-the-art supervised approaches.
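The bootstrapping idea above can be illustrated with a minimal sketch: descriptors from an untrained network are matched across two views, and mutual nearest neighbors serve as (pseudo-)correspondences. The snippet below is an illustrative assumption, not the paper's implementation — in BYOC the features come from a randomly-initialized 2D CNN over RGB frames, whereas here random descriptors simply stand in for per-point features.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(num_points, dim=32):
    """Hypothetical stand-in for per-point features from a
    randomly-initialized network: L2-normalized random descriptors."""
    feats = rng.standard_normal((num_points, dim))
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def mutual_nearest_neighbors(feats_a, feats_b):
    """Keep pair (i, j) only if j is i's best match AND i is j's best match."""
    sim = feats_a @ feats_b.T      # cosine similarity (features are unit-norm)
    nn_ab = sim.argmax(axis=1)     # best match in B for each point in A
    nn_ba = sim.argmax(axis=0)     # best match in A for each point in B
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

# Two hypothetical views, 100 points each
feats_a = extract_features(100)
feats_b = extract_features(100)
corr = mutual_nearest_neighbors(feats_a, feats_b)
```

In the full pipeline, correspondences like these would supervise the learning of both visual and geometric features; the mutual-check filter is one common way to reject asymmetric (likely spurious) matches.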

Cite

Text

El Banani and Johnson. "Bootstrap Your Own Correspondences." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00637

Markdown

[El Banani and Johnson. "Bootstrap Your Own Correspondences." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/banani2021iccv-bootstrap/) doi:10.1109/ICCV48922.2021.00637

BibTeX

@inproceedings{banani2021iccv-bootstrap,
  title     = {{Bootstrap Your Own Correspondences}},
  author    = {El Banani, Mohamed and Johnson, Justin},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {6433--6442},
  doi       = {10.1109/ICCV48922.2021.00637},
  url       = {https://mlanthology.org/iccv/2021/banani2021iccv-bootstrap/}
}