Multi-Layer Depth and Epipolar Feature Transformers for 3D Scene Reconstruction

Abstract

We tackle the problem of automatically reconstructing a complete 3D model of a scene from a single RGB image. This challenging task requires inferring the shape of both visible and occluded surfaces. Our approach uses a viewer-centered, multi-layer representation of scene geometry adapted from recent methods for single-object shape completion. To improve the accuracy of view-centered representations for complex scenes, we introduce a novel "Epipolar Feature Transformer" that transfers convolutional network features from the input view to other virtual camera viewpoints, and thus better covers the 3D scene geometry. Unlike existing approaches that first detect and localize objects in 3D and then infer object shape using category-specific models, our approach is fully convolutional, end-to-end differentiable, and avoids the resolution and memory limitations of voxel representations. We demonstrate the advantages of multi-layer depth representations and epipolar feature transformers on the reconstruction of a large database of indoor scenes.
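The core geometric operation the abstract describes is transferring features from the input camera to a virtual viewpoint. As a minimal illustrative sketch (not the paper's actual Epipolar Feature Transformer, which pools features along epipolar lines without requiring depth), the snippet below back-projects input-view features using a known depth map and scatters them into a second camera; all function names and the known-depth assumption are ours:

```python
import numpy as np

def backproject(depth, K):
    # Lift every pixel of the input view to a 3D point in the
    # input camera frame using its depth value.
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix
    return rays * depth.reshape(1, -1)  # 3 x N points

def transfer_features(feat, depth, K_in, K_out, R, t, out_hw):
    # Scatter C-channel input-view features (feat: C x H x W) into a
    # virtual view with intrinsics K_out and pose (R, t).
    C = feat.shape[0]
    pts = backproject(depth, K_in)            # input camera frame
    pts_v = R @ pts + t.reshape(3, 1)         # virtual camera frame
    out = np.zeros((C, *out_hw))
    valid = pts_v[2] > 1e-6                   # keep points in front of camera
    proj = K_out @ pts_v[:, valid]
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    inb = (u >= 0) & (u < out_hw[1]) & (v >= 0) & (v < out_hw[0])
    src = feat.reshape(C, -1)[:, valid][:, inb]
    out[:, v[inb], u[inb]] = src              # nearest-pixel scatter
    return out
```

With identity intrinsics and pose, each pixel maps back onto itself, so the feature map is reproduced unchanged; the paper's transformer instead learns to aggregate candidate features along each epipolar line, making the transfer differentiable without ground-truth depth.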

Cite

Text

Shin et al. "Multi-Layer Depth and Epipolar Feature Transformers for 3D Scene Reconstruction." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.

Markdown

[Shin et al. "Multi-Layer Depth and Epipolar Feature Transformers for 3D Scene Reconstruction." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/shin2019cvprw-multilayer/)

BibTeX

@inproceedings{shin2019cvprw-multilayer,
  title     = {{Multi-Layer Depth and Epipolar Feature Transformers for 3D Scene Reconstruction}},
  author    = {Shin, Daeyun and Ren, Zhile and Sudderth, Erik B. and Fowlkes, Charless C.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2019},
  pages     = {39--43},
  url       = {https://mlanthology.org/cvprw/2019/shin2019cvprw-multilayer/}
}