MODE: Multi-View Omnidirectional Depth Estimation with 360° Cameras
Abstract
In this paper, we propose a two-stage omnidirectional depth estimation framework with multi-view 360-degree cameras. The framework first estimates the depth maps from different camera pairs via omnidirectional stereo matching and then fuses the depth maps to achieve robustness against mud spots, water drops on camera lenses, and glare caused by intense light. We adopt spherical feature learning to address the distortion of panoramas. In addition, a synthetic 360-degree dataset consisting of 12K road scene panoramas and 3K ground truth depth maps is presented to train and evaluate 360-degree depth estimation algorithms. Our dataset takes soiled camera lenses and glare into consideration, which makes it more consistent with real-world conditions. Experimental results show that the proposed framework generates reliable results in both synthetic and real-world environments, and it achieves state-of-the-art performance on different datasets. The code and data are available at https://github.com/nju-ee/MODE-2022.
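The abstract describes a two-stage pipeline: per-pair omnidirectional stereo matching followed by fusion of the resulting depth maps. Below is a minimal, hypothetical Python sketch of that control flow only; the function names, the confidence-weighted fusion rule, and the camera-pair layout are illustrative assumptions and are not taken from the paper or its released code.

```python
import numpy as np

def stereo_depth(pano_a: np.ndarray, pano_b: np.ndarray) -> np.ndarray:
    """Hypothetical stage 1: omnidirectional stereo matching for one camera pair.
    A real implementation would run a spherical-feature matching network on the
    equirectangular panoramas; here we return a placeholder depth map."""
    h, w, _ = pano_a.shape
    return np.ones((h, w), dtype=np.float32)  # placeholder depth values

def fuse_depths(depth_maps, confidences) -> np.ndarray:
    """Hypothetical stage 2: confidence-weighted fusion of per-pair depth maps,
    so pairs degraded by soiled lenses or glare contribute less to the result."""
    stacked = np.stack(depth_maps)    # (num_pairs, H, W)
    weights = np.stack(confidences)   # (num_pairs, H, W)
    weights = weights / np.clip(weights.sum(axis=0, keepdims=True), 1e-6, None)
    return (stacked * weights).sum(axis=0)

# Usage: four panoramas form several camera pairs; each pair yields a depth map,
# and the maps are fused into a single omnidirectional depth estimate.
panos = [np.zeros((256, 512, 3), dtype=np.float32) for _ in range(4)]
pairs = [(0, 1), (1, 2), (2, 3), (3, 0)]
depths = [stereo_depth(panos[i], panos[j]) for i, j in pairs]
confs = [np.ones_like(d) for d in depths]
fused = fuse_depths(depths, confs)
```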
Cite
Text
Li et al. "MODE: Multi-View Omnidirectional Depth Estimation with 360° Cameras." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19827-4_12
Markdown
[Li et al. "MODE: Multi-View Omnidirectional Depth Estimation with 360° Cameras." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/li2022eccv-mode/) doi:10.1007/978-3-031-19827-4_12
BibTeX
@inproceedings{li2022eccv-mode,
title = {{MODE: Multi-View Omnidirectional Depth Estimation with 360° Cameras}},
author = {Li, Ming and Jin, Xueqian and Hu, Xuejiao and Dai, Jingzhao and Du, Sidan and Li, Yang},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19827-4_12},
url = {https://mlanthology.org/eccv/2022/li2022eccv-mode/}
}