Pre-Trained Word Embeddings for Goal-Conditional Transfer Learning in Reinforcement Learning

Abstract

Reinforcement learning (RL) algorithms typically start tabula rasa, without any prior knowledge of the environment and without any prior skills. However, this often leads to low sample efficiency, requiring a large amount of interaction with the environment. This is especially true in a lifelong learning setting, in which the agent must continually extend its capabilities. In this paper, we examine how a pre-trained, task-independent language model can make a goal-conditional RL agent more sample efficient by facilitating transfer learning between related tasks. We demonstrate our approach experimentally on a set of object navigation tasks.
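The core idea of goal-conditioning on pre-trained embeddings can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding table is a toy stand-in for a frozen, task-independent model (e.g. word2vec or GloVe), and the policy, dimensions, and object names are all hypothetical. The point is that semantically related goal words receive similar embeddings, so a policy trained on one goal can transfer to related goals.

```python
import numpy as np

# Toy stand-in for pre-trained word embeddings; in practice these would be
# loaded from a frozen, task-independent language model (e.g. GloVe).
EMBEDDINGS = {
    "chair": np.array([0.9, 0.1, 0.0, 0.2]),
    "table": np.array([0.8, 0.2, 0.1, 0.1]),  # close to "chair": related objects
    "door":  np.array([0.1, 0.9, 0.5, 0.0]),
}

rng = np.random.default_rng(0)


class GoalConditionedPolicy:
    """Linear policy over [state ; goal embedding] -> action logits (a sketch)."""

    def __init__(self, state_dim, embed_dim, n_actions):
        # Randomly initialised weights; training is omitted in this sketch.
        self.W = rng.normal(scale=0.1, size=(n_actions, state_dim + embed_dim))

    def act(self, state, goal_word):
        # The goal is given as a word; its frozen embedding conditions the
        # policy, so similar goals ("chair"/"table") induce similar behaviour.
        x = np.concatenate([state, EMBEDDINGS[goal_word]])
        logits = self.W @ x
        return int(np.argmax(logits))


policy = GoalConditionedPolicy(state_dim=3, embed_dim=4, n_actions=5)
state = np.array([0.5, -0.2, 0.3])
a_chair = policy.act(state, "chair")
a_table = policy.act(state, "table")
```

Because the embeddings are never updated by the RL loop, a new goal word can be plugged in at test time without retraining the language representation.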

Cite

Text

Hutsebaut-Buysse et al. "Pre-Trained Word Embeddings for Goal-Conditional Transfer Learning in Reinforcement Learning." ICML 2020 Workshops: LaReL, 2020.

Markdown

[Hutsebaut-Buysse et al. "Pre-Trained Word Embeddings for Goal-Conditional Transfer Learning in Reinforcement Learning." ICML 2020 Workshops: LaReL, 2020.](https://mlanthology.org/icmlw/2020/hutsebautbuysse2020icmlw-pretrained/)

BibTeX

@inproceedings{hutsebautbuysse2020icmlw-pretrained,
  title     = {{Pre-Trained Word Embeddings for Goal-Conditional Transfer Learning in Reinforcement Learning}},
  author    = {Hutsebaut-Buysse, Matthias and Mets, Kevin and Latré, Steven},
  booktitle = {ICML 2020 Workshops: LaReL},
  year      = {2020},
  url       = {https://mlanthology.org/icmlw/2020/hutsebautbuysse2020icmlw-pretrained/}
}