Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions
Abstract
As robots become more ubiquitous, it is increasingly important for untrained users to be able to interact with them intuitively. In this work, we investigate how people refer to objects in the world during relatively unstructured communication with robots. We collect a corpus of deictic interactions from users describing objects, which we use to train language and gesture models that allow our robot to determine what objects are being indicated. We introduce a temporal extension to state-of-the-art hierarchical matching pursuit features to support gesture understanding, and demonstrate that combining multiple communication modalities more effectively captures user intent than relying on a single type of input. Finally, we present initial interactions with a robot that uses the learned models to follow commands while continuing to learn from user input.
Cite
Text
Matuszek et al. "Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions." AAAI Conference on Artificial Intelligence, 2014. doi:10.1609/AAAI.V28I1.9051
Markdown
[Matuszek et al. "Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions." AAAI Conference on Artificial Intelligence, 2014.](https://mlanthology.org/aaai/2014/matuszek2014aaai-learning/) doi:10.1609/AAAI.V28I1.9051
BibTeX
@inproceedings{matuszek2014aaai-learning,
title = {{Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions}},
author = {Matuszek, Cynthia and Bo, Liefeng and Zettlemoyer, Luke and Fox, Dieter},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2014},
  pages = {2556--2563},
doi = {10.1609/AAAI.V28I1.9051},
url = {https://mlanthology.org/aaai/2014/matuszek2014aaai-learning/}
}