Sensor Fusion for Joint 3D Object Detection and Semantic Segmentation

Abstract

In this paper, we present an extension to LaserNet, an efficient and state-of-the-art LiDAR-based 3D object detector. We propose a method for fusing image data with the LiDAR data and show that this sensor fusion method improves the detection performance of the model, especially at long ranges. The addition of image data is straightforward and does not require image labels. Furthermore, we expand the capabilities of the model to perform 3D semantic segmentation in addition to 3D object detection. On a large benchmark dataset, we demonstrate our approach achieves state-of-the-art performance on both object detection and semantic segmentation while maintaining a low runtime.
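The general recipe behind this style of camera–LiDAR fusion is to project each LiDAR point into the camera image, gather the image features at the projected pixel, and concatenate them with the point's LiDAR features. The sketch below illustrates that projection-and-gather step with NumPy; it is a simplified illustration under assumed conventions (points already in camera coordinates, a pinhole intrinsic matrix `K`, nearest-pixel sampling), not the authors' exact architecture.

```python
import numpy as np

def project_points(points_cam, K):
    """Project 3D points (N, 3) in camera coordinates to pixel
    coordinates (N, 2) using pinhole intrinsics K (3, 3)."""
    uvw = points_cam @ K.T            # homogeneous pixel coords (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> (u, v)

def fuse_image_features(points_cam, lidar_feats, image_feats, K):
    """Point-level fusion: sample a per-point image feature at each
    projected pixel (nearest neighbor) and concatenate it with the
    corresponding LiDAR feature vector.

    points_cam  : (N, 3) points in camera frame (z > 0)
    lidar_feats : (N, C_lidar) per-point LiDAR features
    image_feats : (H, W, C_img) image feature map (or raw RGB)
    """
    h, w, _ = image_feats.shape
    uv = np.round(project_points(points_cam, K)).astype(int)
    u = np.clip(uv[:, 0], 0, w - 1)   # column index
    v = np.clip(uv[:, 1], 0, h - 1)   # row index
    img_per_point = image_feats[v, u]              # (N, C_img)
    return np.concatenate([lidar_feats, img_per_point], axis=1)
```

Because only the geometric calibration between the sensors is needed to look up each point's pixel, the image branch requires no image-level labels, which matches the paper's claim that adding image data is straightforward.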

Cite

Text

Meyer et al. "Sensor Fusion for Joint 3D Object Detection and Semantic Segmentation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019. doi:10.1109/CVPRW.2019.00162

Markdown

[Meyer et al. "Sensor Fusion for Joint 3D Object Detection and Semantic Segmentation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/meyer2019cvprw-sensor/) doi:10.1109/CVPRW.2019.00162

BibTeX

@inproceedings{meyer2019cvprw-sensor,
  title     = {{Sensor Fusion for Joint 3D Object Detection and Semantic Segmentation}},
  author    = {Meyer, Gregory P. and Charland, Jake and Hegde, Darshan and Laddha, Ankit and Vallespi-Gonzalez, Carlos},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2019},
  pages     = {1230--1237},
  doi       = {10.1109/CVPRW.2019.00162},
  url       = {https://mlanthology.org/cvprw/2019/meyer2019cvprw-sensor/}
}