Grounding Robot Plans from Natural Language Instructions with Incomplete World Knowledge
Abstract
Our goal is to enable robots to interpret and execute high-level tasks conveyed using natural language instructions. For example, consider tasking a household robot to "prepare my breakfast", "clear the boxes on the table", or "make me a fruit milkshake". Interpreting such underspecified instructions requires environmental context and background knowledge about how to accomplish complex tasks. Further, the robot's workspace knowledge may be incomplete: the environment may be only partially observed, or background knowledge may be missing, causing a failure in plan synthesis. We introduce a probabilistic model that utilizes background knowledge to infer latent or missing plan constituents based on semantic co-associations learned from noisy textual corpora of task descriptions. The ability to infer missing plan constituents enables information-seeking actions, such as visual exploration or dialogue with the human, to acquire new knowledge to fill incomplete plans. Results indicate robust plan inference from under-specified instructions in partially-known worlds.
Cite
Text
Nyga et al. "Grounding Robot Plans from Natural Language Instructions with Incomplete World Knowledge." Proceedings of The 2nd Conference on Robot Learning, 2018.
Markdown
[Nyga et al. "Grounding Robot Plans from Natural Language Instructions with Incomplete World Knowledge." Proceedings of The 2nd Conference on Robot Learning, 2018.](https://mlanthology.org/corl/2018/nyga2018corl-grounding/)
BibTeX
@inproceedings{nyga2018corl-grounding,
title = {{Grounding Robot Plans from Natural Language Instructions with Incomplete World Knowledge}},
author = {Nyga, Daniel and Roy, Subhro and Paul, Rohan and Park, Daehyung and Pomarlan, Mihai and Beetz, Michael and Roy, Nicholas},
booktitle = {Proceedings of The 2nd Conference on Robot Learning},
year = {2018},
pages = {714--723},
volume = {87},
url = {https://mlanthology.org/corl/2018/nyga2018corl-grounding/}
}