Automatic Generation of Robot Program Code: Learning from Perceptual Data
Abstract
We propose a novel approach to programming a robot by demonstrating the task multiple times in front of a vision system. Here we integrate human dexterity with sensory data using computer vision techniques in a single platform. A simultaneous feature detection and tracking framework is used to track various features (the finger tips and the wrist joint). A Kalman filter performs the tracking by predicting the tentative feature location, and an HOS-based data clustering algorithm extracts the feature. Color information of the features is used for establishing correspondences. The fast, efficient, and robust vision algorithm thus developed processes a binocular video sequence to obtain the trajectories and the orientation information of the end effector. The concept of a trajectory bundle is introduced to avoid singularities and to obtain an optimal path.
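The abstract describes a predict-then-extract tracking loop: a Kalman filter predicts the tentative feature location in the next frame, and the extracted feature position corrects the prediction. The following is a minimal sketch of such a loop using a constant-velocity Kalman filter for one 2D feature; the state layout, noise parameters, and class name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative constant-velocity Kalman filter for tracking one 2D image
# feature (e.g. a fingertip) across frames. State: [x, y, vx, vy].
# All noise parameters below are assumed values, not from the paper.

class KalmanTracker2D:
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])          # state estimate
        self.P = np.eye(4) * 10.0                       # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)  # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # observe position only
        self.Q = np.eye(4) * q                          # process noise
        self.R = np.eye(2) * r                          # measurement noise

    def predict(self):
        """Predict the tentative feature location for the next frame."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Correct the prediction with the extracted feature position z."""
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In a pipeline like the one described, the predicted location would define a small search window in which the clustering-based extractor looks for the feature, and the extracted position is then fed back via `update`.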
Cite
Text
Yeasin and Chaudhuri. "Automatic Generation of Robot Program Code: Learning from Perceptual Data." IEEE/CVF International Conference on Computer Vision, 1998. doi:10.1109/ICCV.1998.710822
Markdown
[Yeasin and Chaudhuri. "Automatic Generation of Robot Program Code: Learning from Perceptual Data." IEEE/CVF International Conference on Computer Vision, 1998.](https://mlanthology.org/iccv/1998/yeasin1998iccv-automatic/) doi:10.1109/ICCV.1998.710822
BibTeX
@inproceedings{yeasin1998iccv-automatic,
title = {{Automatic Generation of Robot Program Code: Learning from Perceptual Data}},
author = {Yeasin, Mohammed and Chaudhuri, Subhasis},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {1998},
pages = {889-894},
doi = {10.1109/ICCV.1998.710822},
url = {https://mlanthology.org/iccv/1998/yeasin1998iccv-automatic/}
}