A Dual Representation Framework for Robot Learning with Human Guidance

Abstract

The ability to interactively learn skills from human guidance and adjust behavior according to human preference is crucial to accelerating robot learning. However, human guidance is an expensive resource, calling for methods that learn from it efficiently. In this work, we argue that learning is more efficient if the agent is equipped with a high-level, symbolic representation. We propose a dual representation framework for robot learning from human guidance. The dual representation used by the robotic agent includes one for learning a sensorimotor control policy and another, in the form of a symbolic scene graph, for encoding the task-relevant information that motivates human input. Based on this framework, we propose two novel learning algorithms: one for learning from human evaluative feedback and one for learning from human preference. In five continuous control tasks in simulation and in the real world, we demonstrate that our algorithms lead to significant improvements in task performance and learning speed. Additionally, these algorithms require less human effort and are qualitatively preferred by users.

Cite

Text

Zhang et al. "A Dual Representation Framework for Robot Learning with Human Guidance." Conference on Robot Learning, 2022.

Markdown

[Zhang et al. "A Dual Representation Framework for Robot Learning with Human Guidance." Conference on Robot Learning, 2022.](https://mlanthology.org/corl/2022/zhang2022corl-dual/)

BibTeX

@inproceedings{zhang2022corl-dual,
  title     = {{A Dual Representation Framework for Robot Learning with Human Guidance}},
  author    = {Zhang, Ruohan and Bansal, Dhruva and Hao, Yilun and Hiranaka, Ayano and Gao, Jialu and Wang, Chen and Martín-Martín, Roberto and Fei-Fei, Li and Wu, Jiajun},
  booktitle = {Conference on Robot Learning},
  year      = {2022},
  pages     = {738--750},
  volume    = {205},
  url       = {https://mlanthology.org/corl/2022/zhang2022corl-dual/}
}