How Does Goal Relabeling Improve Sample Efficiency?
Abstract
Hindsight experience replay and goal relabeling are successful in reinforcement learning (RL) because they enable agents to learn from failures. Despite their success, we lack a theoretical understanding of (i) why hindsight experience replay improves sample efficiency and (ii) how to design a relabeling method that achieves sample efficiency. To answer these questions, we construct an example showing the information-theoretic improvement in sample efficiency achieved by goal relabeling. The example reveals that goal relabeling can enhance sample efficiency and exploit the rich information in observations through better hypothesis elimination. Based on these insights, we develop an RL algorithm called GOALIVE. To analyze its sample complexity, we introduce a complexity measure, the goal-conditioned Bellman-Eluder (GOAL-BE) dimension, which characterizes the sample complexity of goal-conditioned RL problems. Compared to the Bellman-Eluder dimension, the goal-conditioned version offers an exponential improvement in the best case. To the best of our knowledge, our work provides the first characterization of the theoretical improvement in sample efficiency achieved by goal relabeling.
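To make the mechanism the abstract refers to concrete, below is a minimal sketch of standard hindsight ("future"-strategy) goal relabeling in the style of hindsight experience replay. It is not the paper's GOALIVE algorithm; the `Transition`, `sparse_reward`, and `relabel_episode` names are illustrative assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class Transition:
    state: tuple
    action: int
    goal: tuple           # goal the agent was originally pursuing
    next_state: tuple
    achieved_goal: tuple  # goal actually achieved at next_state

def sparse_reward(achieved_goal, goal):
    """Sparse goal-conditioned reward: 1 only when the desired goal is achieved."""
    return 1.0 if achieved_goal == goal else 0.0

def relabel_episode(episode, num_relabels=4):
    """Relabel each transition with goals achieved later in the same episode,
    so that even a failed episode yields transitions with nonzero reward."""
    relabeled = []
    for t, tr in enumerate(episode):
        # keep the original transition with its original (often zero) reward
        relabeled.append((tr, sparse_reward(tr.achieved_goal, tr.goal)))
        # substitute goals sampled from the achieved goals of future time steps
        future = episode[t:]
        for _ in range(num_relabels):
            new_goal = random.choice(future).achieved_goal
            new_tr = Transition(tr.state, tr.action, new_goal,
                                tr.next_state, tr.achieved_goal)
            relabeled.append((new_tr, sparse_reward(tr.achieved_goal, new_goal)))
    return relabeled
```

Under this relabeling, a trajectory that never reaches its intended goal still produces informative, reward-bearing samples, which is the kind of extra information from observations that the paper's analysis credits with sharper hypothesis elimination.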
Cite
Text
Zheng et al. "How Does Goal Relabeling Improve Sample Efficiency?" International Conference on Machine Learning, 2024.
Markdown
[Zheng et al. "How Does Goal Relabeling Improve Sample Efficiency?" International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/zheng2024icml-goal/)
BibTeX
@inproceedings{zheng2024icml-goal,
title = {{How Does Goal Relabeling Improve Sample Efficiency?}},
author = {Zheng, Sirui and Bai, Chenjia and Yang, Zhuoran and Wang, Zhaoran},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {61246--61266},
volume = {235},
url = {https://mlanthology.org/icml/2024/zheng2024icml-goal/}
}