CodedVTR: Codebook-Based Sparse Voxel Transformer with Geometric Guidance
Abstract
Transformers have gained much attention by outperforming convolutional neural networks in many 2D vision tasks. However, they are known to have generalization problems and to rely on massive-scale pre-training and sophisticated training techniques. When applied to 3D tasks, the irregular data structure and limited data scale add to the difficulty of using transformers. We propose CodedVTR (Codebook-based Voxel TRansformer), which improves the data efficiency and generalization ability of 3D sparse voxel transformers. On the one hand, we propose codebook-based attention, which projects an attention space into its subspace represented by a combination of "prototypes" in a learnable codebook. It regularizes attention learning and improves generalization. On the other hand, we propose geometry-aware self-attention, which utilizes geometric information (geometric pattern, density) to guide attention learning. CodedVTR can be embedded into existing sparse convolution-based methods, and it brings consistent performance improvements on indoor and outdoor 3D semantic segmentation tasks.
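The codebook idea above can be illustrated as follows: rather than learning a free-form attention map per voxel, each voxel softly selects among a small set of learnable prototype attention patterns, and the resulting constrained map weights its neighbors. This is only a minimal NumPy sketch under assumed shapes and names (`codebook`, `w_score`, a fixed neighborhood of size K), not the paper's actual implementation, which operates on sparse voxel grids with geometric guidance.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def codebook_attention(x, neighbor_feats, codebook, w_score):
    """Sketch of codebook-constrained attention (illustrative names/shapes).

    x:              (N, D) voxel features
    neighbor_feats: (N, K, D) features of each voxel's K neighbors
    codebook:       (P, K) learnable prototype attention patterns
    w_score:        (D, P) projection from features to prototype scores
    """
    sel = softmax(x @ w_score)        # (N, P) soft prototype selection
    attn = softmax(sel @ codebook)    # (N, K) attention restricted to the
                                      # span of the codebook prototypes
    # aggregate neighbor features with the constrained attention map
    return np.einsum("nk,nkd->nd", attn, neighbor_feats)
```

Because the attention map is a combination of P prototypes rather than K free parameters per voxel, the attention space is regularized, which is the mechanism the abstract credits for the improved generalization.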
Cite
Text
Zhao et al. "CodedVTR: Codebook-Based Sparse Voxel Transformer with Geometric Guidance." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00149
Markdown
[Zhao et al. "CodedVTR: Codebook-Based Sparse Voxel Transformer with Geometric Guidance." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/zhao2022cvpr-codedvtr/) doi:10.1109/CVPR52688.2022.00149
BibTeX
@inproceedings{zhao2022cvpr-codedvtr,
title = {{CodedVTR: Codebook-Based Sparse Voxel Transformer with Geometric Guidance}},
author = {Zhao, Tianchen and Zhang, Niansong and Ning, Xuefei and Wang, He and Yi, Li and Wang, Yu},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {1435-1444},
doi = {10.1109/CVPR52688.2022.00149},
url = {https://mlanthology.org/cvpr/2022/zhao2022cvpr-codedvtr/}
}