Teaching Robots Through Situated Interactive Dialogue and Visual Demonstrations
Abstract
The ability to quickly adapt to new environments and incorporate new knowledge is of great importance for robots operating in unstructured environments and interacting with non-expert users. This paper reports on our current progress in tackling this problem. We propose a framework for teaching robots to perform tasks using natural language instructions, visual demonstrations, and interactive dialogue. Moreover, we present a module for learning objects incrementally and on the fly, which would enable robots to ground referents in the natural language instructions and to reason about the state of the world.
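To make the incremental, on-the-fly object learning idea concrete, here is a minimal sketch of one way such a module could work. It assumes each observed object arrives as a fixed-length feature vector (e.g., from a pretrained vision model); the class name IncrementalObjectLearner, the nearest-class-mean update rule, and the toy feature vectors are illustrative assumptions, not the paper's actual method.

# Sketch only: incremental object learning via running class means,
# assuming fixed-length feature vectors per observation. Not the
# authors' implementation.
import numpy as np

class IncrementalObjectLearner:
    """Nearest-class-mean classifier that grows one prototype per label."""

    def __init__(self):
        self.means = {}   # label -> running mean feature vector
        self.counts = {}  # label -> number of examples seen so far

    def learn(self, label, feature):
        """Fold one demonstrated example into the label's prototype."""
        feature = np.asarray(feature, dtype=float)
        if label not in self.means:
            self.means[label] = feature
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            # Incremental mean update: no stored examples, no retraining.
            self.means[label] += (feature - self.means[label]) / self.counts[label]

    def ground(self, feature):
        """Return the known label whose prototype is closest to the observation."""
        if not self.means:
            return None
        feature = np.asarray(feature, dtype=float)
        return min(self.means, key=lambda l: np.linalg.norm(feature - self.means[l]))

# Usage: teach two objects from single demonstrations, then ground a referent.
learner = IncrementalObjectLearner()
learner.learn("mug", [0.9, 0.1, 0.0])
learner.learn("book", [0.1, 0.8, 0.3])
print(learner.ground([0.85, 0.15, 0.05]))  # -> "mug"

A running-mean update is used here because it needs no stored examples or batch retraining, which matches the incremental, on-the-fly requirement stated in the abstract.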
Cite
Text
Part and Lemon. "Teaching Robots Through Situated Interactive Dialogue and Visual Demonstrations." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/760
Markdown
[Part and Lemon. "Teaching Robots Through Situated Interactive Dialogue and Visual Demonstrations." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/part2017ijcai-teaching/) doi:10.24963/IJCAI.2017/760
BibTeX
@inproceedings{part2017ijcai-teaching,
title = {{Teaching Robots Through Situated Interactive Dialogue and Visual Demonstrations}},
author = {Part, Jose L. and Lemon, Oliver},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2017},
pages = {5201--5202},
doi = {10.24963/IJCAI.2017/760},
url = {https://mlanthology.org/ijcai/2017/part2017ijcai-teaching/}
}