Object DGCNN: 3D Object Detection Using Dynamic Graphs
Abstract
3D object detection often involves complicated training and testing pipelines, which require substantial domain knowledge about individual datasets. Inspired by recent non-maximum suppression-free 2D object detection models, we propose a 3D object detection architecture on point clouds. Our method models 3D object detection as message passing on a dynamic graph, generalizing the DGCNN framework to predict a set of objects. In our construction, we remove the need for post-processing via object confidence aggregation or non-maximum suppression. To facilitate object detection from sparse point clouds, we also propose a set-to-set distillation approach customized to 3D detection. This approach aligns the outputs of the teacher model and the student model in a permutation-invariant fashion, significantly simplifying knowledge distillation for the 3D detection task. Our method achieves state-of-the-art performance on autonomous driving benchmarks. We also provide extensive analysis of the detection model and distillation framework.
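The permutation-invariant alignment described above can be illustrated with a small sketch: treat the teacher's and student's predictions as unordered sets and pick the matching between them that minimizes total pairwise cost. This is a hypothetical, brute-force illustration only — the function names and the L2 cost are assumptions, and a real implementation would use a bipartite matching algorithm (e.g. Hungarian) rather than enumerating permutations.

```python
from itertools import permutations

def pairwise_cost(a, b):
    # Hypothetical cost between two prediction vectors: plain L2 distance.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def best_matching(teacher, student):
    """Brute-force permutation-invariant alignment of two equal-size sets.

    Returns the permutation of student indices that minimizes the total
    matching cost, along with that cost. Illustrative only: for realistic
    set sizes one would solve the assignment problem efficiently instead
    of enumerating all n! permutations.
    """
    n = len(teacher)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(pairwise_cost(teacher[i], student[perm[i]]) for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

# Because the matching is permutation-invariant, reordering the student's
# outputs does not change the optimal alignment cost.
teacher = [(0.0, 0.0), (1.0, 1.0)]
student = [(1.0, 1.0), (0.0, 0.0)]  # same set, different order
perm, cost = best_matching(teacher, student)
```

Here the optimal matching pairs each teacher prediction with its identical student counterpart, so the cost is zero regardless of output order — the property that makes the distillation loss independent of how either model happens to order its predictions.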
Cite

Text
Wang and Solomon. "Object DGCNN: 3D Object Detection Using Dynamic Graphs." Neural Information Processing Systems, 2021.

Markdown
[Wang and Solomon. "Object DGCNN: 3D Object Detection Using Dynamic Graphs." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/wang2021neurips-object/)

BibTeX
@inproceedings{wang2021neurips-object,
title = {{Object DGCNN: 3D Object Detection Using Dynamic Graphs}},
author = {Wang, Yue and Solomon, Justin M},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/wang2021neurips-object/}
}