A Deep Visual Correspondence Embedding Model for Stereo Matching Costs
Abstract
This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via a Convolutional Neural Network on a large set of stereo images with ground-truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on the KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we show that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated into a global stereo framework, our method ranks among the top 3 two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model makes correct predictions for unseen data outside its labeled training set.
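The abstract does not detail the network itself, but its core idea — embedding the left and right patches with a shared CNN and measuring their dissimilarity in the embedding space — can be sketched briefly. The following is a minimal, hypothetical PyTorch sketch; the layer sizes, the 13x13 patch size, and the cosine-style cost are illustrative assumptions, not the architecture or cost function specified in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbedding(nn.Module):
    """Shared ('Siamese') CNN that maps a grayscale image patch to a
    unit-length embedding vector. Layer sizes are illustrative
    assumptions, not the paper's architecture."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, dim, kernel_size=3),
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        f = self.features(patch)      # (B, dim, h, w)
        f = f.mean(dim=(2, 3))        # global average pool -> (B, dim)
        return F.normalize(f, dim=1)  # embeddings on the unit sphere

def matching_cost(emb_left: torch.Tensor, emb_right: torch.Tensor) -> torch.Tensor:
    """Dissimilarity as 1 - cosine similarity: identical patches cost 0,
    unrelated patches cost roughly 1 (an assumed, not the paper's, cost)."""
    return 1.0 - (emb_left * emb_right).sum(dim=1)

# Usage: score a batch of candidate left/right patch pairs.
net = PatchEmbedding()
left = torch.randn(8, 1, 13, 13)
right = torch.randn(8, 1, 13, 13)
cost = matching_cost(net(left), net(right))  # shape (8,), lower = better match
```

In a stereo pipeline, such a per-pair cost would be evaluated across candidate disparities to build a cost volume, which a global stereo framework (as the abstract describes) would then regularize.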
Cite
Text
Chen et al. "A Deep Visual Correspondence Embedding Model for Stereo Matching Costs." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.117
Markdown
[Chen et al. "A Deep Visual Correspondence Embedding Model for Stereo Matching Costs." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/chen2015iccv-deep/) doi:10.1109/ICCV.2015.117
BibTeX
@inproceedings{chen2015iccv-deep,
  title = {{A Deep Visual Correspondence Embedding Model for Stereo Matching Costs}},
  author = {Chen, Zhuoyuan and Sun, Xun and Wang, Liang and Yu, Yinan and Huang, Chang},
  booktitle = {International Conference on Computer Vision},
  year = {2015},
  doi = {10.1109/ICCV.2015.117},
  url = {https://mlanthology.org/iccv/2015/chen2015iccv-deep/}
}