Enhancing Transfer of Reinforcement Learning Agents with Abstract Contextual Embeddings

Abstract

Deep reinforcement learning (DRL) algorithms have seen great success in performing a plethora of tasks, but often have trouble adapting to changes in the environment. We address this issue by using reward machines (RMs), a graph-based abstraction of the underlying task, to represent the current setting or context. Using a graph neural network (GNN), we embed the RMs into deep latent vector representations and provide them to the agent to enhance its ability to adapt to new contexts. To the best of our knowledge, this is the first work to embed contextual abstractions and let the agent decide how to use them. Our preliminary empirical evaluation demonstrates improved sample efficiency of our approach upon context transfer on a set of grid navigation tasks.
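The paper itself does not include code here; the following is a minimal sketch, assuming a PyTorch setup, of the general idea the abstract describes: encode a reward machine as a small graph, embed it with a simple GNN-style encoder, and concatenate the resulting context embedding to the agent's observation. All class names, node features, and dimensions (RewardMachineEncoder, ContextConditionedPolicy, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): embedding a reward machine (RM) graph
# with a simple GNN and appending the embedding to the agent's observation.
import torch
import torch.nn as nn


class RewardMachineEncoder(nn.Module):
    """Two rounds of mean-aggregation message passing over the RM graph,
    followed by mean pooling to produce a fixed-size context embedding."""

    def __init__(self, node_dim: int, hidden_dim: int, embed_dim: int):
        super().__init__()
        self.msg1 = nn.Linear(node_dim, hidden_dim)
        self.msg2 = nn.Linear(hidden_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, embed_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_rm_states, node_dim); adj: (num_rm_states, num_rm_states)
        # Row-normalise the adjacency so each aggregation is a mean over successors.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        norm_adj = adj / deg
        h = torch.relu(self.msg1(norm_adj @ node_feats))
        h = torch.relu(self.msg2(norm_adj @ h))
        return self.readout(h.mean(dim=0))  # (embed_dim,)


class ContextConditionedPolicy(nn.Module):
    """Policy head that consumes the observation concatenated with the RM embedding."""

    def __init__(self, obs_dim: int, embed_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, obs: torch.Tensor, rm_embedding: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, rm_embedding], dim=-1))


if __name__ == "__main__":
    # Toy RM with 3 abstract states; here node features simply mark the rewarding state.
    node_feats = torch.tensor([[0.0], [0.0], [1.0]])
    adj = torch.tensor([[0.0, 1.0, 0.0],
                        [0.0, 0.0, 1.0],
                        [0.0, 0.0, 0.0]])
    encoder = RewardMachineEncoder(node_dim=1, hidden_dim=16, embed_dim=8)
    policy = ContextConditionedPolicy(obs_dim=4, embed_dim=8, num_actions=5)
    rm_embedding = encoder(node_feats, adj)
    logits = policy(torch.zeros(4), rm_embedding)
    print(logits.shape)  # torch.Size([5])
```

Because the context is injected only through the embedding, the same policy network can, in principle, be reused when the reward machine changes; how the actual paper trains and evaluates this is described in the full text.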

Cite

Text

Azran et al. "Enhancing Transfer of Reinforcement Learning Agents with Abstract Contextual Embeddings." NeurIPS 2022 Workshops: nCSI, 2022.

Markdown

[Azran et al. "Enhancing Transfer of Reinforcement Learning Agents with Abstract Contextual Embeddings." NeurIPS 2022 Workshops: nCSI, 2022.](https://mlanthology.org/neuripsw/2022/azran2022neuripsw-enhancing/)

BibTeX

@inproceedings{azran2022neuripsw-enhancing,
  title     = {{Enhancing Transfer of Reinforcement Learning Agents with Abstract Contextual Embeddings}},
  author    = {Azran, Guy and Danesh, Mohamad Hosein and Albrecht, Stefano V and Keren, Sarah},
  booktitle = {NeurIPS 2022 Workshops: nCSI},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/azran2022neuripsw-enhancing/}
}