Benchmarking Bird's Eye View Detection Robustness to Real-World Corruptions
Abstract
Camera-based bird's eye view (BEV) detection algorithms have recently shown great potential for in-vehicle 3D object detection. Despite steady progress on standard benchmarks, the robustness of BEV detectors has not been thoroughly examined, which is critical for safe operation. To fill this gap, we introduce nuScenes-C, a test suite comprising eight distinct corruptions that are likely to occur in real-world deployment: Bright, Dark, Fog, Snow, Motion Blur, Color Quant, Camera Crash, and Frame Lost. Based on nuScenes-C, we extensively evaluate a wide range of BEV detection models to understand their resilience and reliability. Our findings indicate a strong correlation between absolute performance on in-distribution and out-of-distribution datasets. Nonetheless, relative performance varies considerably across approaches. Our experiments further demonstrate that pre-training and depth-free BEV transformation can enhance out-of-distribution robustness. The benchmark is openly accessible at https://github.com/Daniel-xsy/RoboBEV.
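To illustrate the kinds of perturbations the benchmark covers, the sketch below applies a few corruptions to camera inputs. The corruption names (Bright, Dark, Camera Crash) come from the abstract, but these exact transforms are simplified assumptions for illustration, not the official nuScenes-C implementations; see the RoboBEV repository for the real ones. Images are assumed to be float arrays in [0, 1], and a multi-view rig is a list of per-camera arrays.

```python
import numpy as np

# NOTE: illustrative sketches only; the actual nuScenes-C corruptions
# (severity levels, parameters) live in https://github.com/Daniel-xsy/RoboBEV.

def corrupt_bright(img, delta=0.3):
    """Bright: shift pixel intensities upward, clipping to [0, 1]."""
    return np.clip(img + delta, 0.0, 1.0)

def corrupt_dark(img, scale=0.5):
    """Dark: scale pixel intensities downward."""
    return np.clip(img * scale, 0.0, 1.0)

def corrupt_camera_crash(views, dead=(0,)):
    """Camera Crash: blank out the frames of failed cameras in a multi-view rig."""
    return [np.zeros_like(v) if i in dead else v for i, v in enumerate(views)]

# Usage: corrupt a toy 2x2 RGB frame and a two-camera rig.
img = np.full((2, 2, 3), 0.5)
bright = corrupt_bright(img)          # all pixels become 0.8
dark = corrupt_dark(img)              # all pixels become 0.25
rig = corrupt_camera_crash([img, img], dead=(1,))  # second view blanked
```

A robustness benchmark then re-runs a trained detector on such corrupted copies of the validation set and reports the performance drop relative to clean data.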
Cite
Text
Xie et al. "Benchmarking Bird's Eye View Detection Robustness to Real-World Corruptions." ICLR 2023 Workshops: SR4AD, 2023.

Markdown
[Xie et al. "Benchmarking Bird's Eye View Detection Robustness to Real-World Corruptions." ICLR 2023 Workshops: SR4AD, 2023.](https://mlanthology.org/iclrw/2023/xie2023iclrw-benchmarking/)

BibTeX
@inproceedings{xie2023iclrw-benchmarking,
title = {{Benchmarking Bird's Eye View Detection Robustness to Real-World Corruptions}},
author = {Xie, Shaoyuan and Kong, Lingdong and Zhang, Wenwei and Ren, Jiawei and Pan, Liang and Chen, Kai and Liu, Ziwei},
booktitle = {ICLR 2023 Workshops: SR4AD},
year = {2023},
url = {https://mlanthology.org/iclrw/2023/xie2023iclrw-benchmarking/}
}