VideoTrek: A Vision System for a Tag-Along Robot
Abstract
We present a system that combines multiple visual navigation techniques to achieve GPS-denied, non-line-of-sight SLAM capability for heterogeneous platforms. Our approach builds on several layers of vision algorithms, including sparse frame-to-frame structure from motion (visual odometry), a Kalman filter for fusion with inertial measurement unit (IMU) data, and a distributed visual landmark-matching capability with geometric consistency verification. We apply these techniques to implement a tag-along robot, in which a human operator leads the way and a robot autonomously follows. We show results for a real-time implementation of such a system under real field constraints on CPU power and network resources.
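The fusion layer described above can be illustrated with a generic Kalman filter that propagates the state with IMU data and corrects it with visual-odometry position fixes. The sketch below is a minimal 1-D constant-velocity example for illustration only; the state model, noise parameters, and update structure are assumptions, not the paper's actual filter.

```python
import numpy as np

# Illustrative Kalman filter: IMU acceleration drives the prediction step,
# visual odometry (VO) supplies position measurements for the update step.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for (position, velocity)
B = np.array([[0.5 * dt**2], [dt]])     # control matrix for IMU acceleration input
H = np.array([[1.0, 0.0]])              # VO measures position only
Q = 1e-3 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[1e-2]])                  # VO measurement noise covariance (assumed)

def predict(x, P, accel):
    """Propagate the state using an IMU acceleration sample."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Correct the state with a visual-odometry position fix."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate constant acceleration of 0.2 m/s^2; VO reports the true position.
x = np.zeros((2, 1))
P = np.eye(2)
for t in range(50):
    x, P = predict(x, P, accel=0.2)
    true_pos = 0.001 * (t + 1) ** 2          # 0.5 * 0.2 * ((t+1)*dt)^2
    x, P = update(x, P, z=np.array([[true_pos]]))
```

In the real system the state would be a full 6-DOF pose with IMU bias terms, typically handled with an extended Kalman filter rather than the linear form shown here.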
Cite
Text
Naroditsky et al. "VideoTrek: A Vision System for a Tag-Along Robot." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009. doi:10.1109/CVPR.2009.5206696
Markdown
[Naroditsky et al. "VideoTrek: A Vision System for a Tag-Along Robot." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009.](https://mlanthology.org/cvpr/2009/naroditsky2009cvpr-videotrek/) doi:10.1109/CVPR.2009.5206696
BibTeX
@inproceedings{naroditsky2009cvpr-videotrek,
title = {{VideoTrek: A Vision System for a Tag-Along Robot}},
author = {Naroditsky, Oleg and Zhu, Zhiwei and Das, Aveek and Samarasekera, Supun and Oskiper, Taragay and Kumar, Rakesh},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2009},
  pages = {1101--1108},
doi = {10.1109/CVPR.2009.5206696},
url = {https://mlanthology.org/cvpr/2009/naroditsky2009cvpr-videotrek/}
}