DPS-Net: Deep Polarimetric Stereo Depth Estimation
Abstract
Stereo depth estimation, whether traditional or learning-based, usually struggles in textureless scenes due to its inherent dependence on image correspondence matching. In this paper, we propose a novel neural network, i.e., DPS-Net, to exploit both prior geometric knowledge and polarimetric information for depth estimation from two polarimetric stereo images. Specifically, we construct both RGB and polarization correlation volumes to fully leverage the multi-domain similarity between polarimetric stereo images. Since inherent ambiguities exist in polarization images, we introduce the iso-depth cost explicitly into the network to resolve them. Moreover, we design a cascaded dual-GRU architecture to recurrently update the disparity and effectively fuse the multi-domain correlation features with the iso-depth cost. In addition, we present new synthetic and real polarimetric stereo datasets for evaluation. Experimental results demonstrate that our method outperforms state-of-the-art stereo depth estimation methods.
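The correlation volumes mentioned above follow the standard pattern in learning-based stereo: for each candidate disparity, correlate left-view features with horizontally shifted right-view features. The sketch below is a minimal, hypothetical illustration of that pattern in NumPy; it is not the authors' implementation, and the function name, shapes, and normalization are assumptions.

```python
import numpy as np

def correlation_volume(feat_l, feat_r, max_disp):
    """Build a scanline correlation volume (illustrative sketch only).

    feat_l, feat_r: (C, H, W) feature maps from the left/right views.
    Returns a (max_disp, H, W) volume where slice d holds the dot-product
    similarity between each left pixel (x, y) and the right pixel
    (x - d, y); out-of-bounds matches are left at zero.
    """
    C, H, W = feat_l.shape
    vol = np.zeros((max_disp, H, W), dtype=feat_l.dtype)
    for d in range(max_disp):
        # Left pixels at column >= d match right pixels shifted d columns left.
        vol[d, :, d:] = np.einsum(
            "chw,chw->hw", feat_l[:, :, d:], feat_r[:, :, : W - d]
        )
    # Scale by feature dimension, as is common for dot-product similarity.
    return vol / np.sqrt(C)
```

In DPS-Net such volumes are built in two domains (RGB and polarization features) and fused, whereas this sketch shows only the single-domain case.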
Cite
Text
Tian et al. "DPS-Net: Deep Polarimetric Stereo Depth Estimation." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00330
Markdown
[Tian et al. "DPS-Net: Deep Polarimetric Stereo Depth Estimation." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/tian2023iccv-dpsnet/) doi:10.1109/ICCV51070.2023.00330
BibTeX
@inproceedings{tian2023iccv-dpsnet,
title = {{DPS-Net: Deep Polarimetric Stereo Depth Estimation}},
author = {Tian, Chaoran and Pan, Weihong and Wang, Zimo and Mao, Mao and Zhang, Guofeng and Bao, Hujun and Tan, Ping and Cui, Zhaopeng},
booktitle = {International Conference on Computer Vision},
year = {2023},
  pages = {3569--3579},
doi = {10.1109/ICCV51070.2023.00330},
url = {https://mlanthology.org/iccv/2023/tian2023iccv-dpsnet/}
}