SegStereo: Exploiting Semantic Information for Disparity Estimation
Abstract
Disparity estimation for binocular stereo images has a wide range of applications. Traditional algorithms often fail in featureless regions, where high-level cues such as semantic segments can help. In this paper, we show that appropriate incorporation of semantic cues can greatly rectify predictions in commonly used disparity estimation frameworks. Our method embeds semantic features and regularizes learning with a semantic loss term to improve disparity estimation. Our unified model, SegStereo, employs semantic features from segmentation and introduces a semantic softmax loss, which improves the accuracy of the predicted disparity maps. The semantic cues work well in both unsupervised and supervised settings. SegStereo achieves state-of-the-art results on the KITTI Stereo benchmark and produces decent predictions on both the Cityscapes and FlyingThings3D datasets.
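The abstract describes regularizing disparity learning with a semantic softmax (cross-entropy) loss. Below is a minimal NumPy sketch of that idea, not the paper's implementation: the shapes, the L1 disparity term, and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def semantic_softmax_loss(seg_logits, labels):
    # Per-pixel cross-entropy between predicted class scores
    # (H, W, C) and integer ground-truth labels (H, W).
    h, w, c = seg_logits.shape
    probs = softmax(seg_logits).reshape(-1, c)
    picked = probs[np.arange(h * w), labels.ravel()]
    return -np.mean(np.log(picked + 1e-12))

def total_loss(pred_disp, gt_disp, seg_logits, seg_labels, lam=0.1):
    # Disparity regression term (L1 here, an illustrative choice)
    # regularized by the semantic softmax term, weighted by lam.
    disp_loss = np.mean(np.abs(pred_disp - gt_disp))
    sem_loss = semantic_softmax_loss(seg_logits, seg_labels)
    return disp_loss + lam * sem_loss
```

In the supervised case `gt_disp` is a ground-truth disparity map; in the unsupervised case the first term would be replaced by a photometric reconstruction loss, while the semantic term stays the same.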
Cite
Text
Yang et al. "SegStereo: Exploiting Semantic Information for Disparity Estimation." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01234-2_39
Markdown
[Yang et al. "SegStereo: Exploiting Semantic Information for Disparity Estimation." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/yang2018eccv-segstereo/) doi:10.1007/978-3-030-01234-2_39
BibTeX
@inproceedings{yang2018eccv-segstereo,
title = {{SegStereo: Exploiting Semantic Information for Disparity Estimation}},
author = {Yang, Guorun and Zhao, Hengshuang and Shi, Jianping and Deng, Zhidong and Jia, Jiaya},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2018},
doi = {10.1007/978-3-030-01234-2_39},
url = {https://mlanthology.org/eccv/2018/yang2018eccv-segstereo/}
}