3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration

Abstract

In this paper, we propose the 3DFeat-Net, which learns both a 3D feature detector and descriptor for point cloud matching using weak supervision. Unlike many existing works, we do not require manual annotation of matching point clusters. Instead, we leverage alignment and attention mechanisms to learn feature correspondences from GPS/INS tagged 3D point clouds without explicitly specifying them. We create outdoor Lidar datasets for training and benchmarking, and our experiments on these datasets show that our 3DFeat-Net outperforms existing handcrafted and learned 3D features.
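The weak-supervision idea in the abstract can be illustrated with a minimal NumPy sketch (all names are hypothetical, not the paper's implementation): descriptor distances between two clusters are weighted by per-keypoint attention scores, and a triplet margin loss pulls GPS/INS-aligned (positive) pairs together while pushing non-overlapping (negative) pairs apart, without any point-level correspondence labels.

```python
import numpy as np

def cluster_distance(desc_a, desc_b, attn):
    """Attention-weighted mean distance between two sets of keypoint descriptors.

    desc_a, desc_b: (K, D) arrays of D-dim descriptors for K keypoints.
    attn: (K,) array of non-negative attention weights (learned in the paper,
    fixed here for illustration).
    """
    d = np.linalg.norm(desc_a - desc_b, axis=1)  # per-keypoint distance
    return float(np.sum(attn * d) / np.sum(attn))

def weighted_triplet_loss(anchor, positive, negative, attn, margin=0.2):
    """Hinge triplet loss over whole point cloud clusters (a sketch)."""
    d_pos = cluster_distance(anchor, positive, attn)
    d_neg = cluster_distance(anchor, negative, attn)
    return max(d_pos - d_neg + margin, 0.0)
```

In the actual network both the descriptors and the attention weights are learned end-to-end; this sketch only shows how cluster-level supervision sidesteps explicit point-to-point labels.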

Cite

Text

Zi Jian Yew and Gim Hee Lee. "3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01267-0_37

Markdown

[Zi Jian Yew and Gim Hee Lee. "3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/jianyew2018eccv-3dfeatnet/) doi:10.1007/978-3-030-01267-0_37

BibTeX

@inproceedings{jianyew2018eccv-3dfeatnet,
  title     = {{3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration}},
  author    = {Yew, Zi Jian and Lee, Gim Hee},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  doi       = {10.1007/978-3-030-01267-0_37},
  url       = {https://mlanthology.org/eccv/2018/jianyew2018eccv-3dfeatnet/}
}