Diversity Matters: Fully Exploiting Depth Clues for Reliable Monocular 3D Object Detection

Abstract

As an inherently ill-posed problem, depth estimation from single images is the most challenging part of monocular 3D object detection (M3OD). Many existing methods rely on preconceived assumptions to compensate for the spatial information missing from monocular images, and predict a single depth value for every object of interest. However, these assumptions do not always hold in practical applications. To tackle this problem, we propose a depth solving system that fully explores the visual clues from the subtasks in M3OD and generates multiple estimations of the depth of each target. Since the depth estimations rely on different assumptions in essence, they present diverse distributions. Even if some assumptions collapse, the estimations established on the remaining assumptions are still reliable. In addition, we develop a depth selection and combination strategy. This strategy is able to remove abnormal estimations caused by collapsed assumptions, and adaptively combine the remaining estimations into a single one. In this way, our depth solving system becomes more precise and robust. Exploiting the clues from multiple subtasks of M3OD and without introducing any extra information, our method surpasses the current best method by a relative margin of more than 20% on the Moderate level of the test split in the KITTI 3D object detection benchmark, while still maintaining real-time efficiency.
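The selection-and-combination idea described in the abstract can be illustrated with a minimal sketch: given several depth estimates for one object (each derived under a different assumption), discard outliers that likely stem from a collapsed assumption, then fuse the survivors, weighting each by its estimated reliability. The function name, the MAD-based outlier test, and the inverse-uncertainty weighting below are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def combine_depths(depths, uncertainties, z_thresh=2.0):
    """Hypothetical sketch of a depth selection-and-combination step.

    depths        : candidate depth estimates for one object (meters)
    uncertainties : per-estimate uncertainty scores (larger = less reliable)
    z_thresh      : MAD-based cutoff for flagging abnormal estimates
    """
    depths = np.asarray(depths, dtype=float)
    unc = np.asarray(uncertainties, dtype=float)

    # Flag estimates far from the median as products of collapsed assumptions.
    med = np.median(depths)
    mad = np.median(np.abs(depths - med)) + 1e-6  # avoid division by zero
    keep = np.abs(depths - med) / mad < z_thresh
    if not keep.any():
        keep[:] = True  # fall back to using all estimates

    # Adaptively combine the survivors by inverse-uncertainty weighting.
    w = 1.0 / unc[keep]
    return float(np.sum(w * depths[keep]) / np.sum(w))
```

With equally uncertain estimates `[10.1, 9.9, 30.0]`, the 30 m outlier is rejected and the result is the mean of the remaining two, 10.0 m; unequal uncertainties instead pull the fused depth toward the more reliable estimates.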

Cite

Text

Li et al. "Diversity Matters: Fully Exploiting Depth Clues for Reliable Monocular 3D Object Detection." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00281

Markdown

[Li et al. "Diversity Matters: Fully Exploiting Depth Clues for Reliable Monocular 3D Object Detection." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/li2022cvpr-diversity/) doi:10.1109/CVPR52688.2022.00281

BibTeX

@inproceedings{li2022cvpr-diversity,
  title     = {{Diversity Matters: Fully Exploiting Depth Clues for Reliable Monocular 3D Object Detection}},
  author    = {Li, Zhuoling and Qu, Zhan and Zhou, Yang and Liu, Jianzhuang and Wang, Haoqian and Jiang, Lihui},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {2791--2800},
  doi       = {10.1109/CVPR52688.2022.00281},
  url       = {https://mlanthology.org/cvpr/2022/li2022cvpr-diversity/}
}