Localization in Urban Environments: Monocular Vision Compared to a Differential GPS Sensor
Abstract
In this paper we present a method for computing the localization of a mobile robot with respect to a learning video sequence. The robot is first guided along a path by a human while the camera records a monocular learning sequence. A 3D reconstruction of the path and the environment is then computed offline from this sequence. The 3D reconstruction is then used to compute the pose of the robot in real time (30 Hz) during autonomous navigation. Results from our localization method are compared to the ground truth measured with a differential GPS.
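The real-time localization step amounts to recovering the camera pose from correspondences between image features and the prebuilt 3D map. As a minimal illustration of that idea (a generic textbook technique, not the authors' actual algorithm), the sketch below estimates a 3x4 projection matrix from known 2D-3D correspondences with the Direct Linear Transform; all point data here is synthetic.

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Estimate a 3x4 projection matrix P from n >= 6 2D-3D correspondences."""
    rows = []
    for X, (u, v) in zip(pts3d, pts2d):
        Xh = np.append(X, 1.0)  # homogeneous 3D point
        # Each correspondence gives two linear equations in the entries of P.
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)  # null vector of A, i.e. P up to scale

def project(P, pts3d):
    """Project 3D points to pixel coordinates with projection matrix P."""
    Xh = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

# Synthetic camera (assumed intrinsics): focal 500 px, principal point (320, 240),
# identity rotation, translated 5 units along the optical axis.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
P_true = K @ Rt

rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, (8, 3))  # points in front of the camera
pts2d = project(P_true, pts3d)

P_est = dlt_projection_matrix(pts3d, pts2d)
err = np.abs(project(P_est, pts3d) - pts2d).max()
print(f"max reprojection error: {err:.2e} px")
```

With noise-free correspondences the recovered matrix reproduces the observations up to numerical precision; a practical system would instead use robust pose estimation (e.g. RANSAC over minimal solvers) on noisy matches.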
Cite
Text
Royer et al. "Localization in Urban Environments: Monocular Vision Compared to a Differential GPS Sensor." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2005. doi:10.1109/CVPR.2005.217

Markdown
[Royer et al. "Localization in Urban Environments: Monocular Vision Compared to a Differential GPS Sensor." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2005.](https://mlanthology.org/cvpr/2005/royer2005cvpr-localization/) doi:10.1109/CVPR.2005.217

BibTeX
@inproceedings{royer2005cvpr-localization,
title = {{Localization in Urban Environments: Monocular Vision Compared to a Differential GPS Sensor}},
author = {Royer, Eric and Lhuillier, Maxime and Dhome, Michel and Chateau, Thierry},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2005},
pages = {114-121},
doi = {10.1109/CVPR.2005.217},
url = {https://mlanthology.org/cvpr/2005/royer2005cvpr-localization/}
}