Prioritized Offline Goal-Swapping Experience Replay

Abstract

In goal-conditioned offline reinforcement learning, an agent learns from previously collected data to reach arbitrary goals. Since the offline data contains only a finite number of trajectories, a main challenge is how to generate more data. Goal swapping generates additional data by switching trajectory goals, but in doing so produces a large number of invalid trajectories. To address this issue, we propose prioritized goal-swapping experience replay (PGSER). PGSER uses a pre-trained Q function to assign higher priority weights to goal-swapped transitions that allow reaching the goal. In experiments, PGSER significantly improves over baselines on a wide range of benchmark tasks, including challenging dexterous in-hand manipulation tasks where prior methods were unsuccessful.
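The abstract describes a two-step idea: relabel stored transitions with goals taken from other trajectories, then weight the relabeled transitions by a pre-trained Q function so that reachable goals are replayed more often. The sketch below illustrates that idea under stated assumptions; the buffer layout, the `q_fn` signature, and the softmax weighting are our illustrative choices, not the authors' exact scheme.

```python
import numpy as np

def goal_swap_and_prioritize(transitions, goals, q_fn, temperature=1.0, rng=None):
    """Relabel transitions with goals drawn from other trajectories and
    weight each relabeled transition using a pre-trained Q function.

    transitions: list of (state, action, next_state) tuples from the offline data.
    goals: array of candidate goals collected from the dataset's trajectories.
    q_fn: assumed pre-trained Q(state, action, goal) -> scalar value estimate.
    Returns the goal-swapped transitions and normalized sampling priorities.
    """
    rng = rng or np.random.default_rng()
    swapped, scores = [], []
    for (s, a, s_next) in transitions:
        g = goals[rng.integers(len(goals))]  # swap in a goal from another trajectory
        swapped.append((s, a, s_next, g))
        scores.append(q_fn(s, a, g))         # higher Q suggests the goal is reachable
    scores = np.asarray(scores, dtype=np.float64) / temperature
    scores -= scores.max()                   # numerical stability for the softmax
    priorities = np.exp(scores)
    priorities /= priorities.sum()           # normalized replay sampling weights
    return swapped, priorities

# Prioritized sampling from the goal-swapped buffer (batch_size assumed):
# idx = np.random.default_rng().choice(len(swapped), size=batch_size, p=priorities)
```

Low-Q relabeled transitions (likely invalid goal pairings) are still sampled occasionally rather than discarded outright, which keeps the replay distribution broad while down-weighting implausible goal swaps.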

Cite

Text

Yang et al. "Prioritized Offline Goal-Swapping Experience Replay." ICLR 2023 Workshops: RRL, 2023.

Markdown

[Yang et al. "Prioritized Offline Goal-Swapping Experience Replay." ICLR 2023 Workshops: RRL, 2023.](https://mlanthology.org/iclrw/2023/yang2023iclrw-prioritized/)

BibTeX

@inproceedings{yang2023iclrw-prioritized,
  title     = {{Prioritized Offline Goal-Swapping Experience Replay}},
  author    = {Yang, Wenyan and Pajarinen, Joni and Cai, Dingding and Kamarainen, Joni-Kristian},
  booktitle = {ICLR 2023 Workshops: RRL},
  year      = {2023},
  url       = {https://mlanthology.org/iclrw/2023/yang2023iclrw-prioritized/}
}