Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning

Abstract

To rapidly learn a new task, it is often essential for agents to explore efficiently, especially when performance matters from the first timestep. One way to learn such behaviour is via meta-learning. Many existing methods, however, rely on dense rewards for meta-training and can fail catastrophically if the rewards are sparse. Without a suitable reward signal, the need for exploration during meta-training is exacerbated. To address this, we propose HyperX, which uses novel reward bonuses for meta-training to explore in approximate hyper-state space (where hyper-states represent the environment state and the agent’s task belief). We show empirically that HyperX meta-learns better task-exploration and adapts more successfully to new tasks than existing methods.
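The abstract does not spell out how the exploration bonus over hyper-states is computed. As a minimal, hedged sketch (not the authors' implementation), one common way to obtain such a novelty bonus is random network distillation applied to the concatenation of the environment state and the task-belief vector; all names below (HyperStateRND, belief_dim, feature_dim) are hypothetical.

import torch
import torch.nn as nn

class HyperStateRND(nn.Module):
    """Illustrative novelty bonus over hyper-states (state, task belief)."""

    def __init__(self, state_dim: int, belief_dim: int, feature_dim: int = 64):
        super().__init__()
        hyper_dim = state_dim + belief_dim
        # Fixed, randomly initialised target network (never trained).
        self.target = nn.Sequential(
            nn.Linear(hyper_dim, 128), nn.ReLU(), nn.Linear(128, feature_dim)
        )
        for p in self.target.parameters():
            p.requires_grad = False
        # Predictor network trained to match the target; its error acts as the bonus.
        self.predictor = nn.Sequential(
            nn.Linear(hyper_dim, 128), nn.ReLU(), nn.Linear(128, feature_dim)
        )

    def bonus(self, state: torch.Tensor, belief: torch.Tensor) -> torch.Tensor:
        # Concatenate state and belief to form the (approximate) hyper-state.
        hyper_state = torch.cat([state, belief], dim=-1)
        # Large prediction error => rarely visited hyper-state => large exploration bonus.
        return (self.predictor(hyper_state) - self.target(hyper_state)).pow(2).mean(dim=-1)

In such a sketch, the bonus would be added to the environment reward during meta-training and the predictor would be trained on visited hyper-states, so the bonus decays as regions of hyper-state space become familiar.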

Cite

Text

Zintgraf et al. "Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning." International Conference on Machine Learning, 2021.

Markdown

[Zintgraf et al. "Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/zintgraf2021icml-exploration/)

BibTeX

@inproceedings{zintgraf2021icml-exploration,
  title     = {{Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning}},
  author    = {Zintgraf, Luisa M and Feng, Leo and Lu, Cong and Igl, Maximilian and Hartikainen, Kristian and Hofmann, Katja and Whiteson, Shimon},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {12991--13001},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/zintgraf2021icml-exploration/}
}