Multi-View 3D Object Detection Network for Autonomous Driving

Abstract

This paper aims at high-accuracy 3D object detection in the autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point clouds and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the bird's eye view representation of the 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25% and 30% AP on the tasks of 3D localization and 3D detection, respectively. In addition, for 2D detection, our approach obtains 14.9% higher AP than the state-of-the-art on the hard data among the LIDAR-based methods.
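
To make the deep fusion idea from the abstract more concrete, below is a minimal, hypothetical PyTorch sketch. It is not the authors' released implementation: the module name `DeepFusion`, the 1x1-conv per-view transformations, the number of fusion levels, and the channel/ROI sizes are all assumptions chosen for illustration. It only shows the general pattern of repeatedly joining region-wise features from the bird's eye view, front view, and RGB paths (here with an element-wise mean) so that intermediate layers of the different paths can interact.

```python
# Hypothetical sketch of a deep-fusion block (not the authors' code).
# Assumes three ROI-pooled feature maps of identical shape, one per view
# (bird's eye view, front view, RGB), joined by element-wise mean with
# per-view transformations between fusion steps.
import torch
import torch.nn as nn


class DeepFusion(nn.Module):
    def __init__(self, channels: int, num_levels: int = 3):
        super().__init__()
        # One small transformation per view at every fusion level.
        self.levels = nn.ModuleList(
            nn.ModuleList(
                nn.Sequential(nn.Conv2d(channels, channels, 1), nn.ReLU(inplace=True))
                for _ in range(3)  # BEV, front-view, and RGB paths
            )
            for _ in range(num_levels)
        )

    def forward(self, bev, fv, rgb):
        # Start from the element-wise mean of the three region-wise features.
        fused = (bev + fv + rgb) / 3.0
        for level in self.levels:
            # Each path transforms the shared feature, then the outputs are
            # joined again, letting intermediate layers of the paths interact.
            fused = torch.stack([branch(fused) for branch in level]).mean(dim=0)
        return fused


# Example: fuse 7x7 ROI features with 256 channels from the three views.
if __name__ == "__main__":
    views = [torch.randn(4, 256, 7, 7) for _ in range(3)]
    print(DeepFusion(256)(*views).shape)  # torch.Size([4, 256, 7, 7])
```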

Cite

Text

Chen et al. "Multi-View 3D Object Detection Network for Autonomous Driving." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.691

Markdown

[Chen et al. "Multi-View 3D Object Detection Network for Autonomous Driving." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/chen2017cvpr-multiview/) doi:10.1109/CVPR.2017.691

BibTeX

@inproceedings{chen2017cvpr-multiview,
  title     = {{Multi-View 3D Object Detection Network for Autonomous Driving}},
  author    = {Chen, Xiaozhi and Ma, Huimin and Wan, Ji and Li, Bo and Xia, Tian},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2017},
  doi       = {10.1109/CVPR.2017.691},
  url       = {https://mlanthology.org/cvpr/2017/chen2017cvpr-multiview/}
}