A Vision-Based Gestural Guidance Interface for Mobile Robotic Platforms
Abstract
This paper describes a gestural guidance interface that controls the motion of a mobile platform using a set of predefined static and dynamic hand gestures inspired by the marshalling code. Images captured by an onboard color camera are processed at video rate in order to track the operator’s head and hands. The camera pan, tilt and zoom are adjusted by a fuzzy-logic controller so as to track the operator’s head and maintain it centered and properly sized within the image plane. Gestural commands are defined as two-hand motion patterns, whose features are provided, at video rate, to a trained neural network. A command is considered recognized once the classifier has produced a series of consistent interpretations. A motion-modifying command is then issued in a way that ensures motion coherence and smoothness. The guidance system can be trained online.
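The abstract's recognition rule — a command is accepted only once the classifier has produced a series of consistent interpretations — can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, streak threshold, and per-frame label interface are assumptions made for illustration.

```python
class ConsistencyRecognizer:
    """Illustrative debounce filter (hypothetical): accept a gesture
    label only after the per-frame classifier has emitted the same
    label a required number of consecutive times."""

    def __init__(self, required_streak=5):
        # required_streak is an assumed parameter; the paper does not
        # state how many consistent interpretations are required.
        self.required_streak = required_streak
        self._last = None
        self._count = 0

    def update(self, label):
        """Feed one per-frame classification result; return the label
        once it has been seen `required_streak` times in a row,
        otherwise return None (no command issued yet)."""
        if label == self._last:
            self._count += 1
        else:
            # A differing interpretation resets the streak.
            self._last = label
            self._count = 1
        if self._count >= self.required_streak:
            return label
        return None
```

Filtering of this kind suppresses spurious single-frame misclassifications, which is one plausible way to obtain the motion coherence and smoothness the abstract describes.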
Cite
Text
Paquin and Cohen. "A Vision-Based Gestural Guidance Interface for Mobile Robotic Platforms." European Conference on Computer Vision, 2004. doi:10.1007/978-3-540-24837-8_5
Markdown
[Paquin and Cohen. "A Vision-Based Gestural Guidance Interface for Mobile Robotic Platforms." European Conference on Computer Vision, 2004.](https://mlanthology.org/eccv/2004/paquin2004eccv-vision/) doi:10.1007/978-3-540-24837-8_5
BibTeX
@inproceedings{paquin2004eccv-vision,
title = {{A Vision-Based Gestural Guidance Interface for Mobile Robotic Platforms}},
author = {Paquin, Vincent and Cohen, Paul},
booktitle = {European Conference on Computer Vision},
year = {2004},
pages = {39--47},
doi = {10.1007/978-3-540-24837-8_5},
url = {https://mlanthology.org/eccv/2004/paquin2004eccv-vision/}
}