LAD: Language Augmented Diffusion for Reinforcement Learning
Abstract
Learning skills from language offers a powerful avenue for generalization in RL, but it remains a challenging task because it requires agents to capture the complex interdependencies between language, actions, and states, a problem known as language grounding. In this paper, we propose leveraging Language Augmented Diffusion (LAD) models as a language-to-plan generator. We demonstrate that LAD achieves performance comparable to the state of the art on the CALVIN benchmark with a much simpler architecture, and we analyze the properties of language-conditioned diffusion in reinforcement learning.
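To illustrate the core idea of a language-to-plan diffusion generator, the toy sketch below runs a reverse-diffusion loop that denoises a randomly initialized plan (a sequence of states) conditioned on a language embedding. This is a minimal illustration under stated assumptions, not the paper's method: `eps_model` is a hypothetical stand-in for a learned noise predictor (a real system would use a trained temporal network), and the update omits the noise schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def eps_model(plan, lang_emb, t):
    # Placeholder noise predictor: simply pulls the plan toward the
    # language embedding. (Assumption: plan features and lang_emb share
    # the same dimension; a real model would be learned.)
    return plan - lang_emb

def generate_plan(lang_emb, horizon=8, dim=4, n_steps=50):
    """Reverse-diffusion loop: start from Gaussian noise, denoise step by step,
    conditioning every step on the instruction embedding."""
    plan = rng.standard_normal((horizon, dim))  # pure noise at t = n_steps
    for t in range(n_steps, 0, -1):
        eps = eps_model(plan, lang_emb, t)
        plan = plan - 0.1 * eps  # simplified denoising update (no schedule)
    return plan

lang_emb = np.ones(4)  # stand-in for an encoded instruction
plan = generate_plan(lang_emb)
print(plan.shape)  # (8, 4): horizon x state dimension
```

With this toy predictor the plan contracts toward the conditioning embedding at each step, which mirrors (in a highly simplified way) how language conditioning steers the denoised trajectory.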
Cite
Text
Zhang et al. "LAD: Language Augmented Diffusion for Reinforcement Learning." NeurIPS 2022 Workshops: LaReL, 2022.
Markdown
[Zhang et al. "LAD: Language Augmented Diffusion for Reinforcement Learning." NeurIPS 2022 Workshops: LaReL, 2022.](https://mlanthology.org/neuripsw/2022/zhang2022neuripsw-lad/)
BibTeX
@inproceedings{zhang2022neuripsw-lad,
title = {{LAD: Language Augmented Diffusion for Reinforcement Learning}},
author = {Zhang, Edwin and Lu, Yujie and Wang, William Yang and Zhang, Amy},
booktitle = {NeurIPS 2022 Workshops: LaReL},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/zhang2022neuripsw-lad/}
}