RA-Depth: Resolution Adaptive Self-Supervised Monocular Depth Estimation
Abstract
Existing self-supervised monocular depth estimation methods can dispense with expensive annotations and achieve promising results. However, they suffer from severe performance degradation when a model trained at a fixed resolution is directly evaluated at other resolutions. In this paper, we propose a resolution adaptive self-supervised monocular depth estimation method (RA-Depth) that learns the scale invariance of scene depth. Specifically, we propose a simple yet efficient data augmentation method that generates images of the same scene at arbitrary scales. Then, we develop a dual high-resolution network that uses a multi-path encoder and decoder with dense interactions to aggregate multi-scale features for accurate depth inference. Finally, to explicitly learn the scale invariance of scene depth, we formulate a cross-scale depth consistency loss on depth predictions at different scales. Extensive experiments on the KITTI, Make3D, and NYU-V2 datasets demonstrate that RA-Depth not only achieves state-of-the-art performance, but also exhibits strong resolution adaptation ability.
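To make the cross-scale consistency idea concrete, the sketch below shows one plausible form of such a loss in PyTorch: depth maps predicted from two differently scaled views of the same scene are resampled to a common resolution and penalized with an L1 difference. The function name, the bilinear resampling, and the L1 penalty are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a cross-scale depth consistency loss
# (illustrative only; not the authors' exact formulation).
import torch
import torch.nn.functional as F

def cross_scale_depth_consistency(depth_low, depth_high):
    """L1 consistency between depth maps predicted at two input resolutions.

    depth_low:  (B, 1, h, w) depth predicted from the lower-resolution input
    depth_high: (B, 1, H, W) depth predicted from the higher-resolution input
    """
    # Resample the low-resolution prediction onto the high-resolution grid
    # (bilinear upsampling is an assumption made for this sketch).
    depth_low_up = F.interpolate(
        depth_low, size=depth_high.shape[-2:], mode="bilinear", align_corners=False
    )
    # If the network is scale invariant, both predictions should describe the
    # same scene geometry, so their resampled depths should (nearly) agree.
    return torch.mean(torch.abs(depth_low_up - depth_high))
```

A training loop could then add this term to the usual photometric self-supervision, e.g. `loss = loss_photo + w * cross_scale_depth_consistency(pred_half, pred_full)`, where the weight `w` is likewise an assumed hyperparameter.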
Cite
Text
He et al. "RA-Depth: Resolution Adaptive Self-Supervised Monocular Depth Estimation." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19812-0_33
Markdown
[He et al. "RA-Depth: Resolution Adaptive Self-Supervised Monocular Depth Estimation." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/he2022eccv-radepth/) doi:10.1007/978-3-031-19812-0_33
BibTeX
@inproceedings{he2022eccv-radepth,
title = {{RA-Depth: Resolution Adaptive Self-Supervised Monocular Depth Estimation}},
author = {He, Mu and Hui, Le and Bian, Yikai and Ren, Jian and Xie, Jin and Yang, Jian},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19812-0_33},
url = {https://mlanthology.org/eccv/2022/he2022eccv-radepth/}
}