Zero-Shot Reinforcement Learning via Function Encoders

Abstract

Although reinforcement learning (RL) can solve many challenging sequential decision-making problems, achieving zero-shot transfer across related tasks remains difficult. The difficulty lies in finding a good representation for the current task so that the agent understands how it relates to previously seen tasks. To achieve zero-shot transfer, we introduce the function encoder, a representation learning algorithm that represents a function as a weighted combination of learned, non-linear basis functions. By using a function encoder to represent the reward function or the transition function, the agent receives a coherent vector representation capturing how the current task relates to previously seen tasks. Thus, the agent can transfer between related tasks at run time with no additional training. We demonstrate state-of-the-art data efficiency, asymptotic performance, and training stability in three RL fields by augmenting basic RL algorithms with a function encoder task representation.
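To make the idea concrete, below is a minimal sketch of the function-encoder representation described in the abstract, not the authors' implementation: a shared network outputs the values of k learned basis functions, and a task's vector representation is the coefficient vector obtained by least-squares projection of example (x, y) pairs onto those basis functions. All names (`BasisNet`, `encode`, `predict`) and architecture choices here are hypothetical.

```python
import torch
import torch.nn as nn

class BasisNet(nn.Module):
    """Maps an input x to the values of k learned basis functions g_1(x), ..., g_k(x)."""
    def __init__(self, in_dim: int, k: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, k),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # shape: (batch, k)

def encode(basis: BasisNet, xs: torch.Tensor, ys: torch.Tensor) -> torch.Tensor:
    """Represent a scalar task function f, given example pairs (xs, ys),
    as coefficients c with f(x) ~= sum_i c_i * g_i(x) (least-squares fit)."""
    G = basis(xs)                                   # (n, k) basis values at the sample points
    # Solve min_c ||G c - ys||^2; c is the task's vector representation.
    c = torch.linalg.lstsq(G, ys.unsqueeze(-1)).solution
    return c.squeeze(-1)                            # (k,)

def predict(basis: BasisNet, c: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Evaluate the represented function at new inputs: f_hat(x) = G(x) @ c."""
    return basis(x) @ c
```

Under this reading, zero-shot transfer at run time amounts to encoding the new task's reward (or transition) function from a handful of observed input-output pairs and conditioning the policy on the resulting coefficient vector, with no gradient updates required.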

Cite

Text

Ingebrand et al. "Zero-Shot Reinforcement Learning via Function Encoders." International Conference on Machine Learning, 2024.

Markdown

[Ingebrand et al. "Zero-Shot Reinforcement Learning via Function Encoders." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/ingebrand2024icml-zeroshot/)

BibTeX

@inproceedings{ingebrand2024icml-zeroshot,
  title     = {{Zero-Shot Reinforcement Learning via Function Encoders}},
  author    = {Ingebrand, Tyler and Zhang, Amy and Topcu, Ufuk},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {21007--21019},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/ingebrand2024icml-zeroshot/}
}