Geodesic Self-Attention for 3D Point Clouds

Abstract

Owing to its outstanding ability to capture long-range relationships, the self-attention mechanism has driven remarkable progress on point cloud tasks. However, point cloud objects often have complex non-Euclidean spatial structures whose behavior changes dynamically and unpredictably. Most current self-attention modules rely heavily on dot products in Euclidean space, which cannot capture the internal non-Euclidean structure of point cloud objects, especially long-range relationships along curves of the implicit manifold surface that a point cloud represents. To address this problem, we introduce a novel metric on the Riemannian manifold that captures the long-range geometrical dependencies of point cloud objects, replacing traditional self-attention modules with a Geodesic Self-Attention (GSA) module. Our approach achieves state-of-the-art performance compared to point cloud Transformers on object classification, few-shot classification, and part segmentation benchmarks.
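To make the core idea concrete, here is a minimal sketch of attention driven by geodesic rather than Euclidean distance. It approximates geodesic distance by shortest paths on a k-nearest-neighbor graph over the points — a common surrogate for the intrinsic manifold metric — and turns those distances into attention weights with a softmax. This is an illustrative assumption, not the paper's exact GSA formulation; the function name, the kNN approximation, and the temperature `tau` are all hypothetical choices.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist


def geodesic_attention(points, features, k=4, tau=1.0):
    """Aggregate features with weights from approximate geodesic distances.

    Geodesic distance is approximated by shortest paths on a symmetric
    k-nearest-neighbor graph built over the point cloud (a standard
    surrogate for distance along the underlying manifold surface).
    """
    n = points.shape[0]
    d = cdist(points, points)  # pairwise Euclidean distances
    # Keep only each point's k nearest neighbors as graph edges;
    # inf marks a missing edge in scipy's dense graph convention.
    graph = np.full((n, n), np.inf)
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]  # skip self at index 0
        graph[i, nbrs] = d[i, nbrs]
        graph[nbrs, i] = d[nbrs, i]
    np.fill_diagonal(graph, 0.0)
    geo = shortest_path(graph, method="D", directed=False)  # Dijkstra
    # Nearer along the surface -> larger attention weight.
    logits = -geo / tau
    logits[np.isinf(logits)] = -1e9  # mask disconnected pairs
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ features  # geodesically weighted aggregation
```

Unlike dot-product attention, two points that are close in Euclidean space but far apart along the surface (e.g. opposite sides of a thin slab) receive a small mutual weight here, which is exactly the behavior the abstract argues Euclidean attention cannot provide.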

Cite

Text

Li et al. "Geodesic Self-Attention for 3D Point Clouds." Neural Information Processing Systems, 2022.

Markdown

[Li et al. "Geodesic Self-Attention for 3D Point Clouds." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/li2022neurips-geodesic/)

BibTeX

@inproceedings{li2022neurips-geodesic,
  title     = {{Geodesic Self-Attention for 3D Point Clouds}},
  author    = {Li, Zhengyu and Tang, Xuan and Xu, Zihao and Wang, Xihao and Yu, Hui and Chen, Mingsong and Wei, Xian},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/li2022neurips-geodesic/}
}