Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization
Abstract
In this paper we propose an affordable solution to self-localization, which utilizes visual odometry and road maps as the only inputs. To this end, we present a probabilistic model as well as an efficient approximate inference algorithm, which is able to utilize distributed computation to meet the real-time requirements of autonomous systems. Because of the probabilistic nature of the model, we are able to cope with uncertainty due to noisy visual odometry and inherent ambiguities in the map (e.g., in a Manhattan world). By exploiting freely available, community-developed maps and visual odometry measurements, we are able to localize a vehicle to within 3 m after only a few seconds of driving on maps which contain more than 2,150 km of drivable roads.
Cite
Text
Brubaker et al. "Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization." Conference on Computer Vision and Pattern Recognition, 2013. doi:10.1109/CVPR.2013.393
Markdown
[Brubaker et al. "Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization." Conference on Computer Vision and Pattern Recognition, 2013.](https://mlanthology.org/cvpr/2013/brubaker2013cvpr-lost/) doi:10.1109/CVPR.2013.393
BibTeX
@inproceedings{brubaker2013cvpr-lost,
  title = {{Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization}},
  author = {Brubaker, Marcus A. and Geiger, Andreas and Urtasun, Raquel},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year = {2013},
  doi = {10.1109/CVPR.2013.393},
  url = {https://mlanthology.org/cvpr/2013/brubaker2013cvpr-lost/}
}