On the Location Dependence of Convolutional Neural Network Features

Abstract

As the availability of geotagged imagery has increased, so has the interest in geolocation-related computer vision applications, ranging from wide-area image geolocalization to the extraction of environmental data from social network imagery. Encouraged by the recent success of deep convolutional networks for learning high-level features, we investigate the usefulness of deep learned features for such problems. We compare features extracted from various layers of convolutional neural networks and analyze their discriminative ability with regard to location. Our analysis spans several problem settings, including region identification, visualizing land cover in aerial imagery, and ground-image localization in regions without ground-image reference data (where we achieve state-of-the-art performance on a benchmark dataset). We present results on multiple datasets, including a new dataset we introduce containing hundreds of thousands of ground-level and aerial images in a large region centered around San Francisco.

Cite

Text

Workman and Jacobs. "On the Location Dependence of Convolutional Neural Network Features." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2015. doi:10.1109/CVPRW.2015.7301385

Markdown

[Workman and Jacobs. "On the Location Dependence of Convolutional Neural Network Features." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2015.](https://mlanthology.org/cvprw/2015/workman2015cvprw-location/) doi:10.1109/CVPRW.2015.7301385

BibTeX

@inproceedings{workman2015cvprw-location,
  title     = {{On the Location Dependence of Convolutional Neural Network Features}},
  author    = {Workman, Scott and Jacobs, Nathan},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2015},
  pages     = {70--78},
  doi       = {10.1109/CVPRW.2015.7301385},
  url       = {https://mlanthology.org/cvprw/2015/workman2015cvprw-location/}
}