Consistent Depth Maps Recovery from a Trinocular Video Sequence

Abstract

In this paper, we propose a novel dense depth recovery method for a trinocular video sequence. Specifically, we contribute a novel trinocular stereo matching model, which effectively exploits the advantages of trinocular stereo images and incorporates a visibility term with a segmentation prior for robust depth estimation. To make the recovered depth maps more accurate and temporally consistent, we propose to first classify the pixels into static and dynamic ones, and then perform spatio-temporal depth optimization for them in different ways. In particular, we propose two motion models for handling dynamic pixels. The traditional bundle optimization model and our spatio-temporal optimization model are softly combined in a probabilistic way, so that the depths of both static and dynamic pixels can be effectively refined. Our automatic depth recovery approach is evaluated on a variety of challenging trinocular video sequences.

Cite

Text

Yang et al. "Consistent Depth Maps Recovery from a Trinocular Video Sequence." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012. doi:10.1109/CVPR.2012.6247835

Markdown

[Yang et al. "Consistent Depth Maps Recovery from a Trinocular Video Sequence." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012.](https://mlanthology.org/cvpr/2012/yang2012cvpr-consistent/) doi:10.1109/CVPR.2012.6247835

BibTeX

@inproceedings{yang2012cvpr-consistent,
  title     = {{Consistent Depth Maps Recovery from a Trinocular Video Sequence}},
  author    = {Yang, Wenzhuo and Zhang, Guofeng and Bao, Hujun and Kim, Jiwon and Lee, Ho-Young},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2012},
  pages     = {1466--1473},
  doi       = {10.1109/CVPR.2012.6247835},
  url       = {https://mlanthology.org/cvpr/2012/yang2012cvpr-consistent/}
}