Language-Guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments
Abstract
Recent advances in data-driven models for grounded language understanding have enabled robots to interpret increasingly complex instructions. However, these methods exhibit two fundamental limitations: most require a complete model of the environment to be known a priori, and they reason over a world representation that is flat and unnecessarily detailed, which limits scalability. Recent semantic mapping methods address partial observability by exploiting language as a sensor to infer a distribution over the topological, metric, and semantic properties of the environment. Yet maintaining a distribution over highly detailed maps that can support the grounding of diverse instructions is computationally expensive and hinders real-time human-robot collaboration. We propose a novel framework that learns to adapt perception according to the task at hand in order to maintain compact distributions over semantic maps. Experiments with a mobile manipulator demonstrate more efficient instruction following in a priori unknown environments.
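To make the idea of task-adaptive perception concrete, the following is a minimal Python sketch: it prunes a particle-based distribution over semantic maps down to the object classes an instruction actually requires, so the hypothesis space stays compact. It uses simple keyword matching as a stand-in for the learned, language-conditioned perception model; all names (`MapParticle`, `relevant_classes`, `adapt_perception`) and the heuristic itself are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of task-adaptive perception over a distribution of
# semantic maps. Names and the keyword heuristic are hypothetical; a
# learned model would predict the relevant symbol set from language.
from dataclasses import dataclass, field

# Full perception vocabulary the robot could detect.
ALL_CLASSES = {"mug", "box", "door", "hallway", "table", "chair"}

@dataclass
class MapParticle:
    """One map hypothesis: detected object classes and their 2D poses."""
    objects: dict = field(default_factory=dict)  # class -> (x, y)
    weight: float = 1.0                          # particle weight

def relevant_classes(instruction: str) -> set:
    """Naive grounding: keep only classes mentioned in the instruction."""
    words = set(instruction.lower().split())
    return {c for c in ALL_CLASSES if c in words}

def adapt_perception(particles, instruction):
    """Prune each map hypothesis to the task-relevant symbols, shrinking
    the distribution the planner must reason over."""
    keep = relevant_classes(instruction)
    for p in particles:
        p.objects = {c: pose for c, pose in p.objects.items() if c in keep}
    return particles, keep

if __name__ == "__main__":
    particles = [MapParticle(objects={"mug": (1.0, 2.0),
                                      "chair": (0.5, 3.0),
                                      "door": (4.0, 0.0)})]
    pruned, keep = adapt_perception(particles, "pick up the mug near the door")
    print("relevant classes:", keep)           # {'mug', 'door'}
    print("compact map:", pruned[0].objects)   # mug and door only
```

Under this (assumed) formulation, irrelevant detections such as the chair are never added to the map distribution, which is what keeps inference tractable in a priori unknown environments.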
Cite
Text
Patki et al. "Language-Guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments." Conference on Robot Learning, 2019.

Markdown
[Patki et al. "Language-Guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments." Conference on Robot Learning, 2019.](https://mlanthology.org/corl/2019/patki2019corl-languageguided/)

BibTeX
@inproceedings{patki2019corl-languageguided,
title = {{Language-Guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments}},
author = {Patki, Siddharth and Fahnestock, Ethan and Howard, Thomas M. and Walter, Matthew R.},
booktitle = {Conference on Robot Learning},
year = {2019},
pages = {1201--1210},
volume = {100},
url = {https://mlanthology.org/corl/2019/patki2019corl-languageguided/}
}