From Google Street View to 3D City Models

Abstract

We present a structure-from-motion (SfM) pipeline for visual 3D modeling of a large city area using 360° field-of-view Google Street View images. The core of the pipeline combines state-of-the-art techniques such as SURF feature detection, tentative matching by approximate nearest neighbour search, relative camera motion estimation by solving the 5-point minimal camera pose problem, and sparse bundle adjustment. The robust and stable camera poses, estimated by PROSAC with soft voting and by scale selection using a visual cone test, provide a high-quality initial structure for bundle adjustment. Furthermore, searching for trajectory loops based on co-occurring visual words and closing them by adding new constraints to the bundle adjustment enforces the global consistency of camera poses and 3D structure in the sequence. We present a large-scale reconstruction computed from 4,799 images of the Google Street View Pittsburgh Research Data Set.
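The relative-motion step the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it substitutes the linear eight-point algorithm for the paper's 5-point minimal solver, and a simple cheirality (positive-depth) check for its PROSAC with soft voting and visual cone test; the camera conventions and synthetic data below are assumptions made for the sketch.

```python
import numpy as np

def essential_from_correspondences(x1, x2):
    """Linear eight-point estimate of the essential matrix from
    calibrated (normalized) point pairs x1 <-> x2, each of shape (N, 2)."""
    # Each correspondence gives one linear equation x2^T E x1 = 0.
    a = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    _, _, vt = np.linalg.svd(a)
    e = vt[-1].reshape(3, 3)
    # Project onto the essential manifold (singular values 1, 1, 0).
    u, _, vt = np.linalg.svd(e)
    return u @ np.diag([1.0, 1.0, 0.0]) @ vt

def triangulate(p1, p2, u1, u2):
    """Linear (DLT) triangulation of one correspondence."""
    a = np.stack([
        u1[0] * p1[2] - p1[0],
        u1[1] * p1[2] - p1[1],
        u2[0] * p2[2] - p2[0],
        u2[1] * p2[2] - p2[1],
    ])
    _, _, vt = np.linalg.svd(a)
    x = vt[-1]
    return x[:3] / x[3]

def recover_pose(e, x1, x2):
    """Pick the (R, t) decomposition of E that places the points
    in front of both cameras (cheirality check)."""
    u, _, vt = np.linalg.svd(e)
    if np.linalg.det(u) < 0:
        u = -u
    if np.linalg.det(vt) < 0:
        vt = -vt
    w = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    p1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    best, best_count = None, -1
    for r in (u @ w @ vt, u @ w.T @ vt):
        for t in (u[:, 2], -u[:, 2]):
            p2 = np.hstack([r, t[:, None]])
            pts = np.array([triangulate(p1, p2, a, b) for a, b in zip(x1, x2)])
            depth2 = (pts @ r.T + t)[:, 2]
            count = np.sum((pts[:, 2] > 0) & (depth2 > 0))
            if count > best_count:
                best, best_count = (r, t), count
    return best

# Synthetic check: a known 10° rotation about the y-axis and unit baseline,
# with the convention X_cam2 = R X_cam1 + t.
rng = np.random.default_rng(0)
ang = np.deg2rad(10.0)
r_true = np.array([[np.cos(ang), 0, np.sin(ang)],
                   [0, 1, 0],
                   [-np.sin(ang), 0, np.cos(ang)]])
t_true = np.array([1.0, 0.0, 0.0])
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
cam2 = pts3d @ r_true.T + t_true      # points in the second camera frame
x1 = pts3d[:, :2] / pts3d[:, 2:]      # normalized image coordinates
x2 = cam2[:, :2] / cam2[:, 2:]

e = essential_from_correspondences(x1, x2)
r, t = recover_pose(e, x1, x2)
```

With noise-free correspondences the recovered `r` matches `r_true` and `t` matches `t_true` up to the inherent scale ambiguity (here both baselines are unit length). In the paper's setting the correspondences come from matched SURF features and the estimation is wrapped in a robust PROSAC loop rather than run on all matches at once.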

Cite

Text

Torii et al. "From Google Street View to 3D City Models." IEEE/CVF International Conference on Computer Vision Workshops, 2009. doi:10.1109/ICCVW.2009.5457551

Markdown

[Torii et al. "From Google Street View to 3D City Models." IEEE/CVF International Conference on Computer Vision Workshops, 2009.](https://mlanthology.org/iccvw/2009/torii2009iccvw-google/) doi:10.1109/ICCVW.2009.5457551

BibTeX

@inproceedings{torii2009iccvw-google,
  title     = {{From Google Street View to 3D City Models}},
  author    = {Torii, Akihiko and Havlena, Michal and Pajdla, Tomáš},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2009},
  pages     = {2188--2195},
  doi       = {10.1109/ICCVW.2009.5457551},
  url       = {https://mlanthology.org/iccvw/2009/torii2009iccvw-google/}
}