Large-Scale Multi-Resolution Surface Reconstruction from RGB-D Sequences

Abstract

We propose a method to generate highly detailed, textured 3D models of large environments from RGB-D sequences. Our system runs in real-time on a standard desktop PC with a state-of-the-art graphics card. To reduce memory consumption, we fuse the acquired depth maps and colors into a multi-scale octree representation of a signed distance function. To estimate the camera poses, we construct a pose graph and use dense image alignment to determine the relative pose between pairs of frames. We add edges between nodes when we detect loop closures and optimize the pose graph to correct for long-term drift. Our implementation is highly parallelized on graphics hardware to achieve real-time performance. More specifically, we can reconstruct, store, and continuously update a colored 3D model of an entire corridor of nine rooms at high levels of detail in real-time on a single GPU with 2.5 GB of memory.
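
The fusion step can be pictured with a small sketch. The snippet below is a simplified, single-threaded illustration of multi-resolution signed-distance fusion, not the paper's GPU implementation: the voxel sizes, truncation band, pixel subsampling, and the depth-based level-selection heuristic are assumed values chosen for readability, and the update rule is the standard weighted running average used in volumetric fusion.

import numpy as np

# Assumed parameters, for illustration only.
FINEST_VOXEL = 0.005   # edge length of a level-0 voxel in meters
TRUNCATION   = 0.02    # truncation band of the signed distance function
NUM_LEVELS   = 4       # number of octree resolution levels

# Sparse multi-scale store: one dictionary per level,
# keyed by integer voxel coordinates, value = [D, W, R, G, B].
tsdf = [dict() for _ in range(NUM_LEVELS)]

def level_for_depth(z):
    """Pick a coarser level for far-away measurements so the voxel size
    roughly tracks the growing pixel footprint (heuristic thresholds)."""
    lvl = int(np.log2(max(z, 1e-3) / 0.5)) + 1
    return int(np.clip(lvl, 0, NUM_LEVELS - 1))

def fuse_depth_map(depth, color, K, T_wc):
    """Fuse one registered RGB-D frame.
    depth: HxW meters (0 = invalid), color: HxWx3 uint8,
    K: 3x3 intrinsics, T_wc: 4x4 camera-to-world pose."""
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    cam = T_wc[:3, 3]
    for v in range(0, h, 4):                  # subsample pixels for speed
        for u in range(0, w, 4):
            z = float(depth[v, u])
            if z <= 0.0:
                continue
            # Back-project the pixel and transform it into world space.
            p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
            p_w = (T_wc @ p_cam)[:3]
            lvl = level_for_depth(z)
            vox = FINEST_VOXEL * (2 ** lvl)
            ray = p_w - cam
            ray /= np.linalg.norm(ray)
            # Update voxels along the viewing ray inside the truncation band.
            for s in np.arange(-TRUNCATION, TRUNCATION + 1e-9, vox):
                q = p_w + s * ray
                key = tuple(np.floor(q / vox).astype(int))
                d_new = -s                    # signed distance to the observed surface
                cell = tsdf[lvl].setdefault(key, [0.0, 0.0, 0.0, 0.0, 0.0])
                w_old = cell[1]
                # Weighted running average of distance and color.
                cell[0] = (cell[0] * w_old + d_new) / (w_old + 1.0)
                cell[2:5] = [(c * w_old + float(ci)) / (w_old + 1.0)
                             for c, ci in zip(cell[2:5], color[v, u])]
                cell[1] = w_old + 1.0

The pose-graph bookkeeping described in the abstract can be sketched in the same spirit. This is only a hypothetical container: the dense image alignment that produces the relative transforms and the SE(3) graph optimization that distributes loop-closure error are omitted.

import numpy as np

class PoseGraph:
    """Hypothetical pose-graph container: nodes are camera frames,
    edges carry relative SE(3) transforms from dense RGB-D alignment."""
    def __init__(self):
        self.nodes = {0: np.eye(4)}   # frame id -> world-from-camera pose estimate
        self.edges = []               # (i, j, T_ij, kind)

    def add_odometry(self, i, j, T_ij):
        # Chain consecutive relative poses to get an initial pose estimate.
        if i in self.nodes and j not in self.nodes:
            self.nodes[j] = self.nodes[i] @ T_ij
        self.edges.append((i, j, T_ij, "odometry"))

    def add_loop_closure(self, i, j, T_ij):
        # Added when a previously seen view is recognized again; a graph
        # optimizer over SE(3) would then spread the accumulated drift
        # along the loop (the optimization itself is omitted here).
        self.edges.append((i, j, T_ij, "loop"))

# Usage sketch with placeholder identity transforms.
g = PoseGraph()
g.add_odometry(0, 1, np.eye(4))
g.add_loop_closure(1, 0, np.eye(4))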

Cite

Text

Steinbrücker et al. "Large-Scale Multi-Resolution Surface Reconstruction from RGB-D Sequences." International Conference on Computer Vision, 2013. doi:10.1109/ICCV.2013.405

Markdown

[Steinbrücker et al. "Large-Scale Multi-Resolution Surface Reconstruction from RGB-D Sequences." International Conference on Computer Vision, 2013.](https://mlanthology.org/iccv/2013/steinbrucker2013iccv-largescale/) doi:10.1109/ICCV.2013.405

BibTeX

@inproceedings{steinbrucker2013iccv-largescale,
  title     = {{Large-Scale Multi-Resolution Surface Reconstruction from RGB-D Sequences}},
  author    = {Steinbrücker, Frank and Kerl, Christian and Cremers, Daniel},
  booktitle = {International Conference on Computer Vision},
  year      = {2013},
  doi       = {10.1109/ICCV.2013.405},
  url       = {https://mlanthology.org/iccv/2013/steinbrucker2013iccv-largescale/}
}