Monocular Dense 3D Reconstruction of a Complex Dynamic Scene from Two Perspective Frames

Abstract

This paper proposes a new approach for monocular dense 3D reconstruction of a complex dynamic scene from two perspective frames. By applying superpixel oversegmentation to the image, we model a generically dynamic (hence non-rigid) scene with a piecewise planar and rigid approximation. In this way, we reduce the dynamic reconstruction problem to a "3D jigsaw puzzle" problem that assembles its pieces from an unorganized "soup of superpixels". We show that our method provides an effective solution to the inherent relative scale ambiguity in structure-from-motion. Since our method assumes no template prior, no per-object segmentation, and no knowledge about the rigidity of the dynamic scene, it is applicable to a wide range of scenarios. Extensive experiments on both synthetic and real monocular sequences demonstrate that our method outperforms state-of-the-art methods.
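The core modeling idea above is to approximate a non-rigid scene as a collection of small planar, rigid pieces. The following is a minimal numpy sketch of that idea only, and not the paper's method: the paper fits planes to irregular superpixels and jointly recovers per-piece rigid motion, whereas this toy (`piecewise_planar_depth` is a hypothetical helper) simply fits one least-squares plane per regular grid patch of a depth map.

```python
import numpy as np

def piecewise_planar_depth(depth, patch=8):
    """Approximate a depth map with one plane z = a*x + b*y + c per patch.

    Toy illustration of a piecewise planar scene model: the paper uses
    superpixels as pieces; here a regular grid stands in for simplicity.
    """
    H, W = depth.shape
    approx = np.empty((H, W), dtype=float)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            block = depth[i:i + patch, j:j + patch].astype(float)
            # Pixel coordinates within the patch form the design matrix.
            ys, xs = np.mgrid[0:block.shape[0], 0:block.shape[1]]
            A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(block.size)])
            # Least-squares plane fit to the patch's depths.
            coef, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
            approx[i:i + patch, j:j + patch] = (A @ coef).reshape(block.shape)
    return approx

# A globally planar depth map is reproduced exactly by the per-patch planes.
depth = np.fromfunction(lambda y, x: 0.01 * x + 0.02 * y + 1.0, (32, 32))
approx = piecewise_planar_depth(depth)
print(np.abs(approx - depth).max())
```

A smooth surface is well approximated with small patches, while the approximation error grows with patch size; the paper's superpixels adapt the piece boundaries to image content instead of using a fixed grid.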

Cite

Text

Kumar et al. "Monocular Dense 3D Reconstruction of a Complex Dynamic Scene from Two Perspective Frames." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.498

Markdown

[Kumar et al. "Monocular Dense 3D Reconstruction of a Complex Dynamic Scene from Two Perspective Frames." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/kumar2017iccv-monocular/) doi:10.1109/ICCV.2017.498

BibTeX

@inproceedings{kumar2017iccv-monocular,
  title     = {{Monocular Dense 3D Reconstruction of a Complex Dynamic Scene from Two Perspective Frames}},
  author    = {Kumar, Suryansh and Dai, Yuchao and Li, Hongdong},
  booktitle = {International Conference on Computer Vision},
  year      = {2017},
  doi       = {10.1109/ICCV.2017.498},
  url       = {https://mlanthology.org/iccv/2017/kumar2017iccv-monocular/}
}