Worldwide Pose Estimation Using 3D Point Clouds

Abstract

We address the problem of determining where a photo was taken by estimating a full 6-DOF-plus-intrinsics camera pose with respect to a large geo-registered 3D point cloud, bringing together research on image localization, landmark recognition, and 3D pose estimation. Our method scales to datasets with hundreds of thousands of images and tens of millions of 3D points through the use of two new techniques: a co-occurrence prior for RANSAC and bidirectional matching of image features with 3D points. We evaluate our method on several large datasets, and show state-of-the-art results on landmark recognition as well as the ability to locate cameras to within meters, requiring only seconds per query.
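The two techniques named in the abstract can be illustrated with a minimal sketch. The first function finds mutual nearest-neighbor matches between image feature descriptors and 3D point descriptors (a simplified stand-in for the paper's bidirectional matching); the second draws a RANSAC minimal sample with probability proportional to a prior weight (a generic stand-in for the co-occurrence prior). Both function names and the use of plain Euclidean mutual-NN matching are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def bidirectional_matches(feat_desc, point_desc):
    """Mutual nearest-neighbor matches between image features (F x D)
    and 3D point descriptors (P x D). A simplified stand-in for
    bidirectional 2D-3D matching; returns (feature_idx, point_idx) pairs."""
    # Pairwise Euclidean distances between all features and all points.
    d = np.linalg.norm(feat_desc[:, None, :] - point_desc[None, :, :], axis=2)
    fwd = d.argmin(axis=1)  # best 3D point for each image feature
    bwd = d.argmin(axis=0)  # best image feature for each 3D point
    # Keep only matches that agree in both directions.
    return [(f, int(p)) for f, p in enumerate(fwd) if bwd[p] == f]

def prior_weighted_sample(matches, prior, k, rng):
    """Draw a RANSAC minimal sample of k matches, with probability
    proportional to a per-match prior weight (e.g. a co-occurrence score)."""
    prior = np.asarray(prior, dtype=float)
    idx = rng.choice(len(matches), size=k, replace=False, p=prior / prior.sum())
    return [matches[i] for i in idx]

# Toy example: two features, three candidate 3D points.
feats = np.array([[0.0, 0.0], [10.0, 0.0]])
points = np.array([[0.0, 1.0], [10.0, 1.0], [5.0, 5.0]])
matches = bidirectional_matches(feats, points)
sample = prior_weighted_sample(matches, [0.9, 0.1], k=2,
                               rng=np.random.default_rng(0))
```

In a full pipeline, samples drawn this way would seed hypothesis generation inside a RANSAC loop that solves for camera pose and counts inliers; the weighting biases hypotheses toward match sets that are more likely to be jointly correct.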

Cite

Text

Li et al. "Worldwide Pose Estimation Using 3D Point Clouds." European Conference on Computer Vision, 2012. doi:10.1007/978-3-642-33718-5_2

Markdown

[Li et al. "Worldwide Pose Estimation Using 3D Point Clouds." European Conference on Computer Vision, 2012.](https://mlanthology.org/eccv/2012/li2012eccv-worldwide/) doi:10.1007/978-3-642-33718-5_2

BibTeX

@inproceedings{li2012eccv-worldwide,
  title     = {{Worldwide Pose Estimation Using 3D Point Clouds}},
  author    = {Li, Yunpeng and Snavely, Noah and Huttenlocher, Dan and Fua, Pascal},
  booktitle = {European Conference on Computer Vision},
  year      = {2012},
  pages     = {15--29},
  doi       = {10.1007/978-3-642-33718-5_2},
  url       = {https://mlanthology.org/eccv/2012/li2012eccv-worldwide/}
}