OGBench: Benchmarking Offline Goal-Conditioned RL
Abstract
Offline goal-conditioned reinforcement learning (GCRL) is a major problem in reinforcement learning (RL) because it provides a simple, unsupervised, and domain-agnostic way to acquire diverse behaviors and representations from unlabeled data without rewards. Despite the importance of this setting, we lack a standard benchmark that can systematically evaluate the capabilities of offline GCRL algorithms. In this work, we propose OGBench, a new, high-quality benchmark for algorithms research in offline goal-conditioned RL. OGBench consists of 8 types of environments, 85 datasets, and reference implementations of 6 representative offline GCRL algorithms. We have designed these challenging and realistic environments and datasets to directly probe different capabilities of algorithms, such as stitching, long-horizon reasoning, and the ability to handle high-dimensional inputs and stochasticity. While representative algorithms may rank similarly on prior benchmarks, our experiments reveal stark strengths and weaknesses in these different capabilities, providing a strong foundation for building new algorithms. Project page: https://seohong.me/projects/ogbench
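For reference, a minimal usage sketch of loading an OGBench environment and its datasets, based on the loading API shown on the project page. The dataset name below is one example task; treat the exact call signature as an assumption if it differs from the released version.

# Minimal sketch: load an OGBench environment with its offline datasets.
# Assumes the `ogbench` package API from the project page; the dataset
# name is one example task among the 85 provided.
import ogbench

dataset_name = 'antmaze-large-navigate-v0'
env, train_dataset, val_dataset = ogbench.make_env_and_datasets(dataset_name)

# Datasets are dictionaries of NumPy arrays keyed by field name.
print(train_dataset['observations'].shape)
print(train_dataset['actions'].shape)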
Cite
Text
Park et al. "OGBench: Benchmarking Offline Goal-Conditioned RL." International Conference on Learning Representations, 2025.

Markdown
[Park et al. "OGBench: Benchmarking Offline Goal-Conditioned RL." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/park2025iclr-ogbench/)

BibTeX
@inproceedings{park2025iclr-ogbench,
  title     = {{OGBench: Benchmarking Offline Goal-Conditioned RL}},
  author    = {Park, Seohong and Frans, Kevin and Eysenbach, Benjamin and Levine, Sergey},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/park2025iclr-ogbench/}
}