LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers
Abstract
We propose a framework that leverages foundation models as teachers, guiding a reinforcement learning agent to acquire semantically meaningful behavior without human intervention. In our framework, the agent receives task instructions grounded in its training environment from large language models. A vision-language model then guides the agent in learning these tasks by providing reward feedback. We demonstrate that our method can learn semantically meaningful skills in the challenging, open-ended MineDojo environment, where prior unsupervised skill discovery methods struggle. Additionally, we discuss the challenges we observed in using off-the-shelf foundation models as teachers and our efforts to address them.
Cite
Text
Nam et al. "LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers." NeurIPS 2023 Workshops: ALOE, 2023.
Markdown
[Nam et al. "LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers." NeurIPS 2023 Workshops: ALOE, 2023.](https://mlanthology.org/neuripsw/2023/nam2023neuripsw-lift/)
BibTeX
@inproceedings{nam2023neuripsw-lift,
  title     = {{LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers}},
  author    = {Nam, Taewook and Lee, Juyong and Zhang, Jesse and Hwang, Sung Ju and Lim, Joseph J and Pertsch, Karl},
  booktitle = {NeurIPS 2023 Workshops: ALOE},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/nam2023neuripsw-lift/}
}