Mediating Between Qualitative and Quantitative Representations for Task-Orientated Human-Robot Interaction
Abstract
In human-robot interaction (HRI) it is essential that the robot interprets and reacts to a human's utterances in a manner that reflects their intended meaning. In this paper we present a collection of novel techniques that allow a robot to interpret and execute spoken commands describing manipulation goals involving qualitative spatial constraints (e.g. "put the red ball near the blue cube"). The resulting implemented system integrates computer vision, potential field models of spatial relationships, and action planning to mediate between the continuous real world and the discrete, qualitative representations used for symbolic reasoning.
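The abstract's potential field models map a qualitative relation such as "near" onto a continuous applicability score over positions, which the robot can then maximise when grounding a command. A minimal sketch of the idea (the function names and the Gaussian falloff are illustrative assumptions, not the paper's exact model):

```python
import math

def near_score(point, landmark, sigma=1.0):
    """Potential-field applicability of 'point is near landmark':
    1.0 at the landmark, decaying smoothly with Euclidean distance.
    (Gaussian falloff is an assumed shape for illustration.)"""
    dist = math.dist(point, landmark)
    return math.exp(-(dist ** 2) / (2 * sigma ** 2))

def best_placement(candidates, landmark, sigma=1.0):
    """Pick the candidate position that best satisfies 'near landmark',
    i.e. the argmax of the potential field over discrete candidates."""
    return max(candidates, key=lambda p: near_score(p, landmark, sigma))

# E.g. to execute "put the red ball near the blue cube", score each
# free placement position against the cube's location:
cube = (0.0, 0.0)
positions = [(0.5, 0.0), (3.0, 3.0)]
target = best_placement(positions, cube)
```

Thresholding the same score would give the discrete, symbolic fact `near(ball, cube)` for the planner, which is the quantitative-to-qualitative direction of the mediation.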
Cite
Text
Brenner et al. "Mediating Between Qualitative and Quantitative Representations for Task-Orientated Human-Robot Interaction." International Joint Conference on Artificial Intelligence, 2007.

Markdown
[Brenner et al. "Mediating Between Qualitative and Quantitative Representations for Task-Orientated Human-Robot Interaction." International Joint Conference on Artificial Intelligence, 2007.](https://mlanthology.org/ijcai/2007/brenner2007ijcai-mediating/)

BibTeX
@inproceedings{brenner2007ijcai-mediating,
title = {{Mediating Between Qualitative and Quantitative Representations for Task-Orientated Human-Robot Interaction}},
author = {Brenner, Michael and Hawes, Nick and Kelleher, John D. and Wyatt, Jeremy L.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2007},
pages = {2072-2077},
url = {https://mlanthology.org/ijcai/2007/brenner2007ijcai-mediating/}
}