Building Reconstruction Using Manhattan-World Grammars

Abstract

We present a passive computer vision method that exploits existing mapping and navigation databases to automatically create 3D building models. Our method defines a grammar for representing changes in building geometry that approximately follow the Manhattan-world assumption, which states that three mutually orthogonal directions predominate in the scene. Using multiple calibrated aerial images, we extend previous Manhattan-world methods to robustly produce a single, coherent, complete geometric model of a building with partial textures. Our method uses an optimization to discover a 3D building geometry that produces the same set of façade orientation changes observed in the captured images. We have applied our method to several real-world buildings and have analyzed our approach using synthetic buildings.
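The Manhattan-world assumption mentioned in the abstract can be illustrated with a short sketch: façade normals estimated from the calibrated aerial images are snapped to the nearest of three mutually orthogonal dominant directions. The sketch below is only illustrative of that assumption under our own choices (the function name, the NumPy implementation, and the identity default for the scene axes are ours); it is not the paper's grammar-based reconstruction or its optimization.

import numpy as np

def snap_to_manhattan(normals, R=np.eye(3)):
    """Snap unit facade normals to the closest of the three mutually
    orthogonal Manhattan axes (columns of R), up to sign.

    normals : (N, 3) array of unit normal vectors estimated from images.
    R       : 3x3 rotation whose columns are the dominant scene axes
              (identity assumes an axis-aligned building; hypothetical default).
    """
    axes = R.T                       # rows are the three Manhattan directions
    dots = normals @ axes.T          # (N, 3) projections onto each axis
    best = np.argmax(np.abs(dots), axis=1)          # dominant axis per normal
    signs = np.sign(dots[np.arange(len(normals)), best])
    return signs[:, None] * axes[best]              # signed snapped normals

# Example: noisy normals of two walls and one roof-aligned facade
noisy = np.array([[0.98, 0.05, 0.02],
                  [0.04, -0.97, 0.10],
                  [-0.03, 0.02, 0.99]])
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
print(snap_to_manhattan(noisy))

In practice the scene rotation R would itself be estimated (for example from vanishing points in the calibrated images) rather than assumed to be the identity.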

Cite

Text

Vanegas et al. "Building Reconstruction Using Manhattan-World Grammars." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010. doi:10.1109/CVPR.2010.5540190

Markdown

[Vanegas et al. "Building Reconstruction Using Manhattan-World Grammars." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010.](https://mlanthology.org/cvpr/2010/vanegas2010cvpr-building/) doi:10.1109/CVPR.2010.5540190

BibTeX

@inproceedings{vanegas2010cvpr-building,
  title     = {{Building Reconstruction Using Manhattan-World Grammars}},
  author    = {Vanegas, Carlos A. and Aliaga, Daniel G. and Benes, Bedrich},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2010},
  pages     = {358--365},
  doi       = {10.1109/CVPR.2010.5540190},
  url       = {https://mlanthology.org/cvpr/2010/vanegas2010cvpr-building/}
}