Dynamic Depth Recovery from Multiple Synchronized Video Streams
Abstract
This paper addresses the problem of extracting depth information for non-rigid dynamic 3D scenes from multiple synchronized video streams. Three main issues are discussed in this context: (i) temporally consistent depth estimation, (ii) sharp depth discontinuity estimation around object boundaries, and (iii) enforcement of the global visibility constraint. We present a framework in which the scene is modeled as a collection of 3D piecewise planar surface patches induced by color-based image segmentation. This representation is continuously estimated using an incremental formulation in which the 3D geometric, motion, and global visibility constraints are enforced over space and time. The proposed algorithm optimizes a cost function that incorporates the spatial color consistency constraint and a smooth scene motion model.
Cite
Text
Tao et al. "Dynamic Depth Recovery from Multiple Synchronized Video Streams." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2001. doi:10.1109/CVPR.2001.990464
Markdown
[Tao et al. "Dynamic Depth Recovery from Multiple Synchronized Video Streams." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2001.](https://mlanthology.org/cvpr/2001/tao2001cvpr-dynamic/) doi:10.1109/CVPR.2001.990464
BibTeX
@inproceedings{tao2001cvpr-dynamic,
title = {{Dynamic Depth Recovery from Multiple Synchronized Video Streams}},
author = {Tao, Hai and Sawhney, Harpreet S. and Kumar, Rakesh},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2001},
pages = {I:118-124},
doi = {10.1109/CVPR.2001.990464},
url = {https://mlanthology.org/cvpr/2001/tao2001cvpr-dynamic/}
}