Image to LIDAR Matching for Geotagging in Urban Environments

Abstract

We present a novel method for matching ground-based query images to a georeferenced LIDAR 3D dataset acquired from an airborne platform in urban environments. We address two main technical challenges: (i) the different modalities of the query and the reference data (electro-optical vs. LIDAR), which pose unique difficulties for the matching problem; (ii) the very different viewing directions from which the query images and the LIDAR data were acquired. We make two main technical contributions in this paper. First, we present a method for automatically extracting features from LIDAR data that remain largely invariant under projection into a 2D image and thus allow robust matching across modalities and changes in viewpoint. Second, we describe a matching technique that finds the best 3D pose relating the query input image to a rendered image of the 3D models. We present results of matching images to high-resolution LIDAR data covering five square kilometers of a city, which demonstrate the power of the proposed matching method.
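The paper does not publish code, but the pose-search idea in the second contribution can be illustrated in miniature: project LIDAR-derived 3D features into a candidate camera and score the pose by how well the projections align with 2D features detected in the query image. The sketch below is our own simplified NumPy illustration of that scoring step (a basic pinhole projection with a mean reprojection-error score); the function names and the error metric are assumptions, not the authors' actual pipeline.

```python
import numpy as np

def project_points(X, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera.

    X: (N, 3) world points (e.g. 3D features extracted from LIDAR)
    K: (3, 3) camera intrinsics
    R: (3, 3) world-to-camera rotation
    t: (3,)  world-to-camera translation
    Returns (N, 2) pixel coordinates.
    """
    Xc = X @ R.T + t            # transform into the camera frame
    uv = Xc @ K.T               # apply intrinsics (homogeneous pixels)
    return uv[:, :2] / uv[:, 2:3]

def reprojection_score(X, x_obs, K, R, t):
    """Mean reprojection error (pixels) between projected 3D features
    and their putative 2D matches in the query image; lower is better."""
    x_proj = project_points(X, K, R, t)
    return float(np.mean(np.linalg.norm(x_proj - x_obs, axis=1)))

# Toy usage: rank two candidate poses for the same 3D points.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
X = np.array([[0.0, 0.0, 10.0],
              [1.0, 0.0, 10.0],
              [0.0, 1.0, 12.0]])
x_obs = project_points(X, K, np.eye(3), np.zeros(3))  # "observed" features

good = reprojection_score(X, x_obs, K, np.eye(3), np.zeros(3))
bad = reprojection_score(X, x_obs, K, np.eye(3), np.array([0.5, 0.0, 0.0]))
print(good < bad)  # the correct pose scores lower
```

In the paper's setting, candidate poses would come from rendering the georeferenced LIDAR models from many viewpoints rather than from a toy grid; the scoring principle (minimizing 2D alignment error over poses) is the same.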

Cite

Text

Matei et al. "Image to LIDAR Matching for Geotagging in Urban Environments." IEEE/CVF Winter Conference on Applications of Computer Vision, 2013. doi:10.1109/WACV.2013.6475048

Markdown

[Matei et al. "Image to LIDAR Matching for Geotagging in Urban Environments." IEEE/CVF Winter Conference on Applications of Computer Vision, 2013.](https://mlanthology.org/wacv/2013/matei2013wacv-image/) doi:10.1109/WACV.2013.6475048

BibTeX

@inproceedings{matei2013wacv-image,
  title     = {{Image to LIDAR Matching for Geotagging in Urban Environments}},
  author    = {Matei, Bogdan C. and Valk, Nick Vander and Zhu, Zhiwei and Cheng, Hui and Sawhney, Harpreet S.},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2013},
  pages     = {413--420},
  doi       = {10.1109/WACV.2013.6475048},
  url       = {https://mlanthology.org/wacv/2013/matei2013wacv-image/}
}