A Persistent Spatial Semantic Representation for High-Level Natural Language Instruction Execution
Abstract
Natural language provides an accessible and expressive interface to specify long-term tasks for robotic agents. However, non-experts are likely to specify such tasks with high-level instructions, which abstract over specific robot actions through several layers of abstraction. We propose that the key to bridging this gap between language and robot actions over long execution horizons is persistent representations. We propose a persistent spatial semantic representation method, and show how it enables building an agent that performs hierarchical reasoning to effectively execute long-term tasks. We evaluate our approach on the ALFRED benchmark and achieve state-of-the-art results, despite completely avoiding the commonly used step-by-step instructions. https://hlsm-alfred.github.io/
Cite
Text
Blukis et al. "A Persistent Spatial Semantic Representation for High-Level Natural Language Instruction Execution." Conference on Robot Learning, 2021.
Markdown
[Blukis et al. "A Persistent Spatial Semantic Representation for High-Level Natural Language Instruction Execution." Conference on Robot Learning, 2021.](https://mlanthology.org/corl/2021/blukis2021corl-persistent/)
BibTeX
@inproceedings{blukis2021corl-persistent,
title = {{A Persistent Spatial Semantic Representation for High-Level Natural Language Instruction Execution}},
author = {Blukis, Valts and Paxton, Chris and Fox, Dieter and Garg, Animesh and Artzi, Yoav},
booktitle = {Conference on Robot Learning},
year = {2021},
pages = {706--717},
volume = {164},
url = {https://mlanthology.org/corl/2021/blukis2021corl-persistent/}
}