Visually-Guided Navigation by Comparing Two-Dimensional Edge Images
Abstract
We present a method for navigating a robot from an initial position to a specified landmark in its visual field, using a sequence of monocular images. The location of the landmark with respect to the robot is determined using the change in size and location of the landmark in the image, as a function of the motion of the robot. The landmark location is estimated after the first three images are taken, and this estimate is refined as the robot moves. The method can correct for errors in the robot motion, as well as navigate around obstacles. The obstacle avoidance is done using bump sensors, sonar and dead reckoning, rather than visual servoing. The method does not require prior calibration of the camera. We show some examples of the operation of the system.
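The core geometric idea of estimating a landmark's distance from how its apparent size changes as the robot moves can be illustrated with a minimal pinhole-camera sketch. This is not the paper's algorithm (which compares edge images over a sequence and refines the estimate); it only shows the underlying depth-from-scale relation for a straight-line approach, with the function name and values chosen for illustration.

```python
def depth_from_scale(step, size_before, size_after):
    """Estimate the distance to a landmark from the change in its
    apparent (image) size after the camera advances `step` units
    straight toward it.

    Under a pinhole model, apparent size is inversely proportional
    to depth: s = size_after / size_before = Z1 / Z2, where
    Z2 = Z1 - step.  Solving gives Z1 = step * s / (s - 1).
    """
    s = size_after / size_before
    if s <= 1.0:
        raise ValueError("landmark must appear larger after approaching it")
    return step * s / (s - 1.0)

# Example: advancing 1 m makes the landmark 25% larger in the image
# (80 px -> 100 px), so it was 5 m away before the move.
print(depth_from_scale(1.0, 80.0, 100.0))  # 5.0
```

In practice a single image pair like this is noise-sensitive, which is consistent with the paper's use of three initial images and continued refinement as the robot moves.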
Cite
Text
Huttenlocher et al. "Visually-Guided Navigation by Comparing Two-Dimensional Edge Images." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1994. doi:10.1109/CVPR.1994.323910
Markdown
[Huttenlocher et al. "Visually-Guided Navigation by Comparing Two-Dimensional Edge Images." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1994.](https://mlanthology.org/cvpr/1994/huttenlocher1994cvpr-visually/) doi:10.1109/CVPR.1994.323910
BibTeX
@inproceedings{huttenlocher1994cvpr-visually,
title = {{Visually-Guided Navigation by Comparing Two-Dimensional Edge Images}},
author = {Huttenlocher, Daniel P. and Leventon, Michael E. and Rucklidge, William},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {1994},
pages = {842--847},
doi = {10.1109/CVPR.1994.323910},
url = {https://mlanthology.org/cvpr/1994/huttenlocher1994cvpr-visually/}
}