Task-Aware Monocular Depth Estimation for 3D Object Detection
Abstract
Monocular depth estimation enables 3D perception from a single 2D image and has therefore attracted considerable research attention for years. Almost all methods treat foreground and background regions ("things and stuff") in an image equally. However, not all pixels are equal: the depth of foreground objects plays a crucial role in 3D object recognition and localization. To date, how to boost the depth prediction accuracy of foreground objects has rarely been discussed. In this paper, we first analyze the data distributions and interaction of foreground and background, then propose the foreground-background separated monocular depth estimation (ForeSeE) method, which estimates foreground and background depth using separate optimization objectives and decoders. Our method significantly improves depth estimation performance on foreground objects. Applying ForeSeE to 3D object detection, we achieve 7.5 AP gains and set new state-of-the-art results among monocular methods. Code will be available at: https://github.com/WXinlong/ForeSeE.
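The abstract describes the core idea only at a high level: two decoders share an encoder, and each is supervised on its own region (foreground object pixels vs. background). The following is a minimal NumPy sketch of that separation, assuming simple L1 depth losses and a hard mask-based merge at inference; the paper's actual objectives and merging scheme may differ, and all function names here are hypothetical illustrations, not the released code's API.

```python
import numpy as np

def separated_depth_loss(pred_fg, pred_bg, gt_depth, fg_mask):
    """Hypothetical foreground/background-separated loss: each decoder's
    prediction is penalized only on the pixels of its own region."""
    fg = fg_mask.astype(bool)
    # Foreground decoder supervised only on object pixels (L1 error).
    loss_fg = np.abs(pred_fg[fg] - gt_depth[fg]).mean() if fg.any() else 0.0
    # Background decoder supervised only on the remaining pixels.
    loss_bg = np.abs(pred_bg[~fg] - gt_depth[~fg]).mean() if (~fg).any() else 0.0
    return loss_fg + loss_bg

def merge_predictions(pred_fg, pred_bg, fg_mask):
    """Hypothetical inference-time merge: take the foreground decoder's
    depth on object pixels and the background decoder's depth elsewhere."""
    return np.where(fg_mask.astype(bool), pred_fg, pred_bg)

# Tiny worked example on a 2x2 depth map.
gt = np.array([[1.0, 2.0], [3.0, 4.0]])
mask = np.array([[1, 0], [0, 1]])       # two foreground pixels
pred_fg = gt.copy()                     # foreground decoder is exact here
pred_bg = gt + 1.0                      # background decoder is off by 1
loss = separated_depth_loss(pred_fg, pred_bg, gt, mask)   # 0.0 + 1.0
merged = merge_predictions(pred_fg, pred_bg, mask)
```

The key design point the sketch illustrates is that background pixels (which usually dominate the image) no longer drag the foreground decoder's optimization toward background depth statistics.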
Cite
Text

Wang et al. "Task-Aware Monocular Depth Estimation for 3D Object Detection." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I07.6908

Markdown

[Wang et al. "Task-Aware Monocular Depth Estimation for 3D Object Detection." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/wang2020aaai-task/) doi:10.1609/AAAI.V34I07.6908

BibTeX
@inproceedings{wang2020aaai-task,
title = {{Task-Aware Monocular Depth Estimation for 3D Object Detection}},
author = {Wang, Xinlong and Yin, Wei and Kong, Tao and Jiang, Yuning and Li, Lei and Shen, Chunhua},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {12257-12264},
doi = {10.1609/AAAI.V34I07.6908},
url = {https://mlanthology.org/aaai/2020/wang2020aaai-task/}
}