RoboDepth: Robust Out-of-Distribution Depth Estimation Under Corruptions
Abstract
Depth estimation from monocular images is pivotal for real-world visual perception systems. While current learning-based depth estimation models are trained and tested on meticulously curated data, they often overlook out-of-distribution (OoD) situations. Yet, in practical settings -- especially safety-critical ones like autonomous driving -- common corruptions can arise. Addressing this oversight, we introduce a comprehensive robustness test suite, RoboDepth, encompassing 18 corruptions spanning three categories: i) weather and lighting conditions; ii) sensor failures and movement; and iii) data processing anomalies. We then benchmark 42 depth estimation models across indoor and outdoor scenes to assess their resilience to these corruptions. Our findings show that, in the absence of a dedicated robustness evaluation framework, many leading depth estimation models may be susceptible to typical corruptions. We examine design considerations for crafting more robust depth estimation models, covering pre-training, augmentation, modality, model capacity, and learning paradigms. We anticipate our benchmark will establish a foundational platform for advancing robust OoD depth estimation.
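To make the three corruption categories concrete, the sketch below shows how a clean input image might be perturbed before being fed to a frozen depth model. This is a minimal illustration under stated assumptions, not the official RoboDepth corruption suite: the two corruptions stand in for the 18 benchmark types, and `depth_model`, `validation_images`, and `abs_rel_error` are hypothetical placeholders.

```python
import numpy as np

# Illustrative corruptions (stand-ins for the 18 RoboDepth corruption types).
# Images are assumed to be float32 arrays with values in [0, 1].

def gaussian_noise(img: np.ndarray, severity: float = 0.08) -> np.ndarray:
    """Sensor-failure-style corruption: add zero-mean Gaussian noise."""
    noisy = img + np.random.normal(0.0, severity, img.shape)
    return np.clip(noisy, 0.0, 1.0).astype(np.float32)

def brightness_shift(img: np.ndarray, delta: float = 0.25) -> np.ndarray:
    """Lighting-style corruption: globally shift image intensity."""
    return np.clip(img + delta, 0.0, 1.0).astype(np.float32)

# Hypothetical evaluation loop: the model is trained on clean data only,
# then scored on corrupted copies of the validation images.
# for img in validation_images:                  # hypothetical iterable
#     pred_clean = depth_model(img)              # hypothetical model call
#     pred_corrupt = depth_model(gaussian_noise(img))
#     degradation = abs_rel_error(pred_clean, pred_corrupt)  # hypothetical metric
```

Applying corruptions only at evaluation time, as sketched here, isolates the effect of the input distribution shift: any degradation in the predictions is attributable to the corruption rather than to differences in training.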
Cite
Text
Kong et al. "RoboDepth: Robust Out-of-Distribution Depth Estimation Under Corruptions." Neural Information Processing Systems, 2023.

Markdown

[Kong et al. "RoboDepth: Robust Out-of-Distribution Depth Estimation Under Corruptions." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/kong2023neurips-robodepth/)

BibTeX
@inproceedings{kong2023neurips-robodepth,
  title     = {{RoboDepth: Robust Out-of-Distribution Depth Estimation Under Corruptions}},
  author    = {Kong, Lingdong and Xie, Shaoyuan and Hu, Hanjiang and Ng, Lai Xing and Cottereau, Benoit and Ooi, Wei Tsang},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/kong2023neurips-robodepth/}
}