Integrating Vision for Human-Robot Interaction
Abstract
Human-robot interaction requires more than robust people detection and tracking. It relies on integrating disparate scene information from tracking and recognition systems with current and prior knowledge to facilitate robotic understanding of, and interaction with, humans and the environment. In this work we discuss our efforts in developing and integrating visual scene processing systems for the purpose of enhancing human-robot interaction. Our latest efforts in integrating 3D scene information to produce novel information sources are discussed and demonstrated. We show how facial pose and pointing gestures are combined to localize a deictic gesture to a single point in space. Additionally, we discuss our efforts in integrating Markov logic networks for high-level reasoning with computer vision systems to facilitate scene understanding.
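The abstract's fusion of facial pose and pointing gestures into a single point in space can be illustrated as a closest-point triangulation between two 3D rays: one cast from the head along the gaze direction and one cast from the hand along the pointing direction. The sketch below is a generic geometric illustration, not the paper's actual method; the function name and parameters are hypothetical.

```python
import numpy as np

def triangulate_deixis(head_pos, gaze_dir, hand_pos, point_dir):
    """Estimate the 3D target of a deictic gesture as the midpoint of the
    shortest segment between the gaze ray and the pointing ray.
    Returns None when the rays are (near-)parallel.
    Hypothetical illustration; not the method from the paper."""
    d1 = np.asarray(gaze_dir, float) / np.linalg.norm(gaze_dir)
    d2 = np.asarray(point_dir, float) / np.linalg.norm(point_dir)
    w0 = np.asarray(head_pos, float) - np.asarray(hand_pos, float)
    b = d1 @ d2                  # cosine of the angle between the rays
    denom = 1.0 - b * b          # ~0 when the rays are parallel
    if denom < 1e-9:
        return None
    d, e = d1 @ w0, d2 @ w0
    t = (b * e - d) / denom      # parameter along the gaze ray
    s = (e - b * d) / denom      # parameter along the pointing ray
    closest_on_gaze = np.asarray(head_pos, float) + t * d1
    closest_on_point = np.asarray(hand_pos, float) + s * d2
    return 0.5 * (closest_on_gaze + closest_on_point)
```

Using the segment midpoint (rather than either ray alone) tolerates noise in both the head-pose and hand-tracking estimates, since the two rays rarely intersect exactly.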
Cite
Text
Fransen et al. "Integrating Vision for Human-Robot Interaction." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2010. doi:10.1109/CVPRW.2010.5543749
Markdown
[Fransen et al. "Integrating Vision for Human-Robot Interaction." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2010.](https://mlanthology.org/cvprw/2010/fransen2010cvprw-integrating/) doi:10.1109/CVPRW.2010.5543749
BibTeX
@inproceedings{fransen2010cvprw-integrating,
title = {{Integrating Vision for Human-Robot Interaction}},
author = {Fransen, Benjamin R. and Lawson, Wallace E. and Bugajska, Magdalena D.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2010},
pages = {9-16},
doi = {10.1109/CVPRW.2010.5543749},
url = {https://mlanthology.org/cvprw/2010/fransen2010cvprw-integrating/}
}