Minimal Scene Descriptions from Structure from Motion Models

Abstract

How much data do we need to describe a location? We explore this question in the context of 3D scene reconstructions created by running structure from motion on large Internet photo collections, where reconstructions can contain many millions of 3D points. We consider several methods for computing much more compact representations of such reconstructions for the task of location recognition, with the goal of maintaining good performance with very small models. In particular, we introduce a new method for computing compact models that takes into account both image-point relationships and feature distinctiveness, and we show that this method produces small models that yield better recognition performance than previous model reduction techniques.
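The covering idea behind such model reduction can be sketched as a greedy K-cover: select a small subset of 3D points so that every image in the reconstruction still observes at least K selected points. The sketch below is a simplified illustration under assumed inputs, not the authors' implementation; in particular, it uses only raw coverage gain and omits the paper's feature-distinctiveness weighting, and the visibility data is a toy example.

```python
# Simplified greedy K-cover sketch for SfM model compression (illustrative
# only, not the paper's algorithm): keep a small point subset such that
# each image still sees at least K selected points.
from collections import defaultdict

def greedy_k_cover(point_visibility, k):
    """point_visibility: dict point_id -> set of image_ids observing it.
    Greedily picks points by marginal coverage gain until every image is
    covered at least k times (or no point still helps)."""
    deficit = defaultdict(lambda: k)      # remaining required coverage per image
    for imgs in point_visibility.values():
        for img in imgs:
            deficit[img]                  # register each image with deficit k
    selected = []
    remaining = dict(point_visibility)
    while any(d > 0 for d in deficit.values()) and remaining:
        # gain = number of still-under-covered images this point observes
        best = max(remaining,
                   key=lambda p: sum(deficit[i] > 0 for i in remaining[p]))
        if sum(deficit[i] > 0 for i in remaining[best]) == 0:
            break                         # no point improves coverage further
        selected.append(best)
        for img in remaining.pop(best):
            deficit[img] -= 1
    return selected

# Toy visibility: 4 points observed across 3 images "a", "b", "c"
vis = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"a"}, 3: {"c"}}
subset = greedy_k_cover(vis, k=1)
```

In this toy case the greedy pass covers all three images with two points instead of four; the paper's contribution is, in part, to weight such a selection by how distinctive each point's features are, so the retained points are also reliable for matching.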

Cite

Text

Cao and Snavely. "Minimal Scene Descriptions from Structure from Motion Models." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.66

Markdown

[Cao and Snavely. "Minimal Scene Descriptions from Structure from Motion Models." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/cao2014cvpr-minimal/) doi:10.1109/CVPR.2014.66

BibTeX

@inproceedings{cao2014cvpr-minimal,
  title     = {{Minimal Scene Descriptions from Structure from Motion Models}},
  author    = {Cao, Song and Snavely, Noah},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2014},
  doi       = {10.1109/CVPR.2014.66},
  url       = {https://mlanthology.org/cvpr/2014/cao2014cvpr-minimal/}
}