SOE-Net: A Self-Attention and Orientation Encoding Network for Point Cloud Based Place Recognition
Abstract
We tackle the problem of place recognition from point cloud data and introduce a self-attention and orientation encoding network (SOE-Net) that fully explores the relationship between points and incorporates long-range context into point-wise local descriptors. Local information of each point from eight orientations is captured in a PointOE module, while long-range feature dependencies among local descriptors are captured with a self-attention unit. Moreover, we propose a novel loss function called Hard Positive Hard Negative quadruplet loss (HPHN quadruplet), which achieves better performance than commonly used metric learning losses. Experiments on various benchmark datasets demonstrate the superior performance of the proposed network over current state-of-the-art approaches. Our code is released publicly at https://github.com/Yan-Xia/SOE-Net.
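As a rough illustration of the HPHN idea, the sketch below mines the hardest (farthest) positive and the hardest (closest) negative within a training tuple and combines them in a single quadruplet-style margin term. The margin value, the tuple layout, and the single-margin formulation are assumptions made for illustration and may differ from the formulation in the paper.

```python
import torch

def hphn_quadruplet_loss(anchor, positives, negatives, other_neg, margin=0.5):
    """Hedged sketch of a hard-positive hard-negative quadruplet loss.

    anchor:    (D,)   global descriptor of the query point cloud
    positives: (P, D) descriptors of point clouds from the same place
    negatives: (N, D) descriptors of point clouds from other places
    other_neg: (D,)   descriptor of a second negative, as in quadruplet losses
    `margin` and the exact formulation are illustrative assumptions,
    not taken from the paper.
    """
    d_pos   = torch.norm(positives - anchor, dim=1)      # anchor-positive distances
    d_neg   = torch.norm(negatives - anchor, dim=1)      # anchor-negative distances
    d_other = torch.norm(negatives - other_neg, dim=1)   # second-negative distances

    hardest_pos = d_pos.max()                            # hard positive: farthest positive
    hardest_neg = torch.min(d_neg.min(), d_other.min())  # hard negative: closest negative overall

    return torch.relu(hardest_pos - hardest_neg + margin)
```

Minimizing this scalar pushes descriptors of the same place to lie closer to the query than any negative by at least the chosen margin.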
Cite
Text
Xia et al. "SOE-Net: A Self-Attention and Orientation Encoding Network for Point Cloud Based Place Recognition." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.01119
Markdown
[Xia et al. "SOE-Net: A Self-Attention and Orientation Encoding Network for Point Cloud Based Place Recognition." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/xia2021cvpr-soenet/) doi:10.1109/CVPR46437.2021.01119
BibTeX
@inproceedings{xia2021cvpr-soenet,
title = {{SOE-Net: A Self-Attention and Orientation Encoding Network for Point Cloud Based Place Recognition}},
author = {Xia, Yan and Xu, Yusheng and Li, Shuang and Wang, Rui and Du, Juan and Cremers, Daniel and Stilla, Uwe},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {11348-11357},
doi = {10.1109/CVPR46437.2021.01119},
url = {https://mlanthology.org/cvpr/2021/xia2021cvpr-soenet/}
}