Repetition-Based Dense Single-View Reconstruction

Abstract

This paper presents a novel approach for dense reconstruction from a single view of a repetitive scene structure. Given an image and its detected repetition regions, we model shape recovery as dense pixel correspondence within a single image. The correspondences are represented by an interval map that encodes the distance from each pixel to its matched pixels within the image. To obtain dense repetitive structures, we develop a new repetition constraint that penalizes inconsistency between the repetition intervals of dynamically corresponding pixel pairs. We use graph cuts to balance the high-level constraint of geometric repetition against the low-level constraints of photometric consistency and spatial smoothness. We demonstrate accurate reconstruction of dense 3D repetitive structures through a variety of experiments, which show the robustness of our approach to outliers such as structure variations, illumination changes, and occlusions.
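The abstract describes an energy that combines a photometric data term, a spatial smoothness term, and a repetition-interval constraint, minimized with graph cuts over the image. As a rough illustration only (not the paper's method), the sketch below estimates an interval map for a 1D signal: the 2D graph-cut optimization is replaced by an exact dynamic program along the scanline, and the function name, candidate intervals, and the weight `lam` are all hypothetical choices.

```python
import numpy as np

def interval_map_1d(row, intervals, lam=0.5):
    """Assign each pixel a repetition interval (distance to its match).

    Per-pixel energy (a loose analogue of the paper's formulation):
      data term:       |I[p] - I[p + d_p]|   (photometric consistency)
      smoothness term: lam * |d_p - d_{p-1}| (between neighbours)
    On a 1D chain this is minimised exactly by dynamic programming,
    standing in for graph cuts on the 2D pixel grid.
    """
    row = np.asarray(row, dtype=float)
    ds = np.asarray(intervals)
    n, k = len(row), len(ds)
    BIG = 1e9  # penalty for intervals pointing past the image border

    # data[p, i]: cost of matching pixel p to pixel p + ds[i]
    data = np.full((n, k), BIG)
    for i, d in enumerate(ds):
        if n - d > 0:
            data[: n - d, i] = np.abs(row[: n - d] - row[d:])

    # Forward pass (Viterbi-style recursion over interval labels).
    cost = data[0].copy()
    back = np.zeros((n, k), dtype=int)
    for p in range(1, n):
        new_cost = np.empty(k)
        for i in range(k):
            trans = cost + lam * np.abs(ds - ds[i])
            j = int(np.argmin(trans))
            back[p, i] = j
            new_cost[i] = data[p, i] + trans[j]
        cost = new_cost

    # Backtrack the minimum-energy label sequence.
    labels = np.empty(n, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for p in range(n - 1, 0, -1):
        labels[p - 1] = back[p, labels[p]]
    return ds[labels]
```

On a toy signal with true period 4, e.g. `np.tile([0.0, 1.0, 2.0, 1.0], 6)` with candidate intervals `[3, 4, 5]`, the recovered interval is 4 wherever a period-4 match lies inside the signal; near the right border the data term is undefined and the smoothness term dominates.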

Cite

Text

Wu et al. "Repetition-Based Dense Single-View Reconstruction." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011. doi:10.1109/CVPR.2011.5995551

Markdown

[Wu et al. "Repetition-Based Dense Single-View Reconstruction." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011.](https://mlanthology.org/cvpr/2011/wu2011cvpr-repetition/) doi:10.1109/CVPR.2011.5995551

BibTeX

@inproceedings{wu2011cvpr-repetition,
  title     = {{Repetition-Based Dense Single-View Reconstruction}},
  author    = {Wu, Changchang and Frahm, Jan-Michael and Pollefeys, Marc},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2011},
  pages     = {3113-3120},
  doi       = {10.1109/CVPR.2011.5995551},
  url       = {https://mlanthology.org/cvpr/2011/wu2011cvpr-repetition/}
}