Self-Organizing Visual Maps
Abstract
This paper deals with automatically learning the spatial distribution of a set of measurements: images, in the examples presented here. The solution to this problem can be viewed as an instance of robot mapping, although it can also be used in other contexts. We examine the problem of organizing an ensemble of images of an environment in terms of the positions from which the images were obtained, where only limited prior odometric information is available. Our approach employs a feature-based method derived from a probabilistic robot localization framework. Initially, a set of visual landmarks is selected from the images and correspondences are found across the ensemble. The images are then localized by first assembling the small subset of images for which odometric confidence is high, then sequentially inserting the remaining images, localizing each against the previous estimates and taking advantage of any priors that are available. We present experimental results validating the approach and demonstrating metrically and topologically accurate results over two large image ensembles, even given only four initial ground truth poses. Finally, we discuss the results, their relationship to the autonomous exploration of an unknown environment, and their utility for robot localization and navigation.
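The sequential insertion scheme the abstract describes (seed the map with a few high-confidence poses, then localize each remaining image against poses estimated so far) can be illustrated with a minimal sketch. All names, the 2-D pose representation, and the relative-offset measurement model below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of sequential map building: a few seed images carry
# trusted poses, and every other image is localized relative to an
# already-localized anchor image (e.g. via matched visual landmarks).

def localize(obs_offset, anchor_pose):
    """Estimate a pose from a relative measurement against an anchor pose."""
    ax, ay = anchor_pose
    dx, dy = obs_offset
    return (ax + dx, ay + dy)

def build_map(seed_poses, observations):
    """seed_poses: {image_id: (x, y)} for images with high odometric confidence.
    observations: list of (image_id, anchor_id, (dx, dy)) relative measurements,
    ordered so that each anchor is localized before the images that depend on it."""
    poses = dict(seed_poses)
    for img, anchor, offset in observations:
        if anchor not in poses:
            continue  # cannot localize yet; anchor has no pose estimate
        poses[img] = localize(offset, poses[anchor])
    return poses

# Toy example: two seed poses, two images inserted sequentially.
seeds = {"img0": (0.0, 0.0), "img1": (1.0, 0.0)}
obs = [("img2", "img1", (0.5, 0.5)), ("img3", "img2", (0.0, 1.0))]
result = build_map(seeds, obs)
print(result)
```

In the paper's setting the relative measurements come from landmark correspondences and a probabilistic localization framework rather than exact offsets, but the control flow (seed, then insert and localize one image at a time) is the same.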
Cite

Text
Sim and Dudek. "Self-Organizing Visual Maps." AAAI Conference on Artificial Intelligence, 2004.

Markdown
[Sim and Dudek. "Self-Organizing Visual Maps." AAAI Conference on Artificial Intelligence, 2004.](https://mlanthology.org/aaai/2004/sim2004aaai-self/)

BibTeX
@inproceedings{sim2004aaai-self,
title = {{Self-Organizing Visual Maps}},
author = {Sim, Robert and Dudek, Gregory},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2004},
pages = {470-475},
url = {https://mlanthology.org/aaai/2004/sim2004aaai-self/}
}