Multi-View Image and ToF Sensor Fusion for Dense 3D Reconstruction

Abstract

Multi-view stereo methods frequently fail to properly reconstruct 3D scene geometry if visible texture is sparse or the scene exhibits difficult self-occlusions. Time-of-Flight (ToF) depth sensors can provide 3D information regardless of texture, but only with limited resolution and accuracy. To find an optimal reconstruction, we propose an integrated multi-view sensor fusion approach that combines information from multiple color cameras and multiple ToF depth sensors. First, multi-view ToF sensor measurements are combined to obtain a coarse but complete model. Then, the initial model is refined in a probabilistic multi-view fusion framework by minimizing an energy function that combines ToF depth sensor information with multi-view stereo and silhouette constraints. We obtain high-quality, dense, and detailed 3D models of scenes that are challenging for stereo alone, while simultaneously reducing the complex noise of ToF sensors.
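The refinement step described in the abstract can be sketched as an energy minimization. The exact terms and weights are not given here; the symbols below (depth map \(D\), weights \(\lambda\)) are illustrative assumptions, not the paper's notation:

```latex
% Sketch of a fused multi-view energy (assumed form, not the paper's exact formulation):
% a ToF data term, a multi-view photo-consistency (stereo) term,
% and a silhouette-consistency term, with hypothetical weights \lambda.
E(D) \;=\; E_{\mathrm{ToF}}(D)
       \;+\; \lambda_{\mathrm{stereo}}\, E_{\mathrm{stereo}}(D)
       \;+\; \lambda_{\mathrm{sil}}\, E_{\mathrm{sil}}(D)
```

Here \(E_{\mathrm{ToF}}\) would penalize deviation from the coarse ToF-derived model, \(E_{\mathrm{stereo}}\) would enforce photo-consistency across the color cameras, and \(E_{\mathrm{sil}}\) would keep the surface inside the multi-view silhouettes.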

Cite

Text

Kim et al. "Multi-View Image and ToF Sensor Fusion for Dense 3D Reconstruction." IEEE/CVF International Conference on Computer Vision Workshops, 2009. doi:10.1109/ICCVW.2009.5457430

Markdown

[Kim et al. "Multi-View Image and ToF Sensor Fusion for Dense 3D Reconstruction." IEEE/CVF International Conference on Computer Vision Workshops, 2009.](https://mlanthology.org/iccvw/2009/kim2009iccvw-multiview/) doi:10.1109/ICCVW.2009.5457430

BibTeX

@inproceedings{kim2009iccvw-multiview,
  title     = {{Multi-View Image and ToF Sensor Fusion for Dense 3D Reconstruction}},
  author    = {Kim, Young Min and Theobalt, Christian and Diebel, James and Kosecka, Jana and Micusík, Branislav and Thrun, Sebastian},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2009},
  pages     = {1542--1549},
  doi       = {10.1109/ICCVW.2009.5457430},
  url       = {https://mlanthology.org/iccvw/2009/kim2009iccvw-multiview/}
}