Language-Conditioned Reinforcement Learning to Solve Misunderstandings with Action Corrections
Abstract
Human-to-human conversation is not just talking and listening. It is an incremental process in which participants continually establish a common understanding to rule out misunderstandings. Current language understanding methods for intelligent robots do not consider this: numerous approaches address non-understandings, but they ignore the incremental process of resolving misunderstandings. In this article, we present a first formalization and experimental validation of incremental action-repair for robotic instruction-following based on reinforcement learning. To evaluate our approach, we propose a collection of benchmark environments for action correction in language-conditioned reinforcement learning, using a synthetic instructor to generate language goals and their corresponding corrections. We show that a reinforcement learning agent can successfully learn to interpret incremental corrections of misunderstood instructions.
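The abstract describes the interaction pattern but not the benchmark itself. As a hedged illustration only, the following Python sketch shows a toy Gym-style loop in which a synthetic instructor emits a language goal and, after a misunderstood action, an incremental correction. All names here (`SyntheticInstructor`, `CorrectionEnv`, the color templates, and the binary reward) are illustrative assumptions, not the paper's actual environments.

```python
# Minimal sketch, assuming a Gym-style reset/step interface.
# Everything below is a hypothetical stand-in for the paper's benchmark.
import random


class SyntheticInstructor:
    """Samples a target and phrases a goal instruction; issues an
    incremental repair ("no, ... the red one") after a misunderstanding."""

    COLORS = ["red", "green", "blue"]

    def __init__(self, rng):
        self.rng = rng
        self.target = None

    def instruction(self):
        self.target = self.rng.choice(self.COLORS)
        return f"go to the {self.target} block"

    def correction(self, wrong_guess):
        return f"no, not the {wrong_guess} one, the {self.target} one"


class CorrectionEnv:
    """Toy language-conditioned environment: observations are
    (instruction, latest correction) pairs, an action is the agent's
    guess of the target color, and the episode ends on a correct guess."""

    def __init__(self, seed=0):
        self.instructor = SyntheticInstructor(random.Random(seed))

    def reset(self):
        self.text = self.instructor.instruction()
        self.correction = None
        return (self.text, self.correction)

    def step(self, guessed_color):
        if guessed_color == self.instructor.target:
            return (self.text, self.correction), 1.0, True
        # Misunderstanding: the instructor repairs it incrementally,
        # and the agent must condition its next action on the repair.
        self.correction = self.instructor.correction(guessed_color)
        return (self.text, self.correction), 0.0, False


env = CorrectionEnv(seed=42)
instruction, _ = env.reset()
print("instruction:", instruction)
# Deliberately guess a wrong color to trigger an incremental correction.
wrong = next(c for c in SyntheticInstructor.COLORS if c != env.instructor.target)
(_, correction), reward, done = env.step(wrong)
print("correction:", correction)
```

Conditioning the observation on both the original instruction and the latest correction is what would let a learned policy revise a misunderstood goal mid-episode, which is the ability the paper evaluates.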
Cite
Text
Röder and Eppe. "Language-Conditioned Reinforcement Learning to Solve Misunderstandings with Action Corrections." NeurIPS 2022 Workshops: LaReL, 2022.
Markdown
[Röder and Eppe. "Language-Conditioned Reinforcement Learning to Solve Misunderstandings with Action Corrections." NeurIPS 2022 Workshops: LaReL, 2022.](https://mlanthology.org/neuripsw/2022/roder2022neuripsw-languageconditioned/)
BibTeX
@inproceedings{roder2022neuripsw-languageconditioned,
  title = {{Language-Conditioned Reinforcement Learning to Solve Misunderstandings with Action Corrections}},
  author = {Röder, Frank and Eppe, Manfred},
  booktitle = {NeurIPS 2022 Workshops: LaReL},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/roder2022neuripsw-languageconditioned/}
}