Transformation-Equivariant 3D Object Detection for Autonomous Driving

Abstract

3D object detection has recently received increasing attention in autonomous driving. Objects in 3D scenes are distributed with diverse orientations, yet ordinary detectors do not explicitly model variations under rotation and reflection transformations. Consequently, large networks and extensive data augmentation are required for robust detection. Recent equivariant networks explicitly model these transformation variations by applying shared networks to multiple transformed copies of the point cloud, showing great potential for object geometry modeling. However, such networks are difficult to apply to 3D object detection in autonomous driving due to their large computation cost and slow inference speed. In this work, we present TED, an efficient Transformation-Equivariant 3D Detector that overcomes these cost and speed issues. TED first applies a sparse convolution backbone to extract multi-channel transformation-equivariant voxel features, and then aligns and aggregates these equivariant features into lightweight, compact representations for high-performance 3D object detection. On the highly competitive KITTI 3D car detection leaderboard, TED ranked 1st among all submissions with competitive efficiency. Code is available at https://github.com/hailanyi/TED.
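The core idea sketched in the abstract (a shared backbone applied to several transformed copies of the point cloud, whose per-transformation feature channels are then aggregated into one compact representation) can be illustrated with a minimal NumPy toy. This is not the paper's sparse-conv implementation: `backbone` here is a hypothetical stand-in for the shared feature extractor, and the aggregation is a simple element-wise max over the rotation channels, which yields a descriptor invariant to the sampled rotation group.

```python
import numpy as np

def rotate_z(points, angle):
    """Rotate an (N, 3) point cloud about the z-axis."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

def backbone(points):
    """Toy stand-in for the shared backbone: a permutation-invariant
    global feature (per-axis mean and max of the coordinates)."""
    return np.concatenate([points.mean(axis=0), points.max(axis=0)])

def multi_channel_features(points, num_rotations=4):
    """Apply the SAME backbone to several rotated copies of the input
    (the transformation-equivariant channels), then aggregate the
    channels into one compact descriptor via element-wise max."""
    feats = []
    for k in range(num_rotations):
        angle = 2.0 * np.pi * k / num_rotations
        feats.append(backbone(rotate_z(points, angle)))
    return np.max(np.stack(feats), axis=0)
```

Because rotating the input by one of the sampled angles merely permutes the channels, the max-pooled descriptor is unchanged, which is the property that lets the aggregated representation stay compact without re-learning every orientation.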

Cite

Text

Wu et al. "Transformation-Equivariant 3D Object Detection for Autonomous Driving." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I3.25380

Markdown

[Wu et al. "Transformation-Equivariant 3D Object Detection for Autonomous Driving." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/wu2023aaai-transformation/) doi:10.1609/AAAI.V37I3.25380

BibTeX

@inproceedings{wu2023aaai-transformation,
  title     = {{Transformation-Equivariant 3D Object Detection for Autonomous Driving}},
  author    = {Wu, Hai and Wen, Chenglu and Li, Wei and Li, Xin and Yang, Ruigang and Wang, Cheng},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {2795--2802},
  doi       = {10.1609/AAAI.V37I3.25380},
  url       = {https://mlanthology.org/aaai/2023/wu2023aaai-transformation/}
}