Option-Aware Temporally Abstracted Value for Offline Goal-Conditioned Reinforcement Learning

Abstract

Offline goal-conditioned reinforcement learning (GCRL) offers a practical learning paradigm in which goal-reaching policies are trained from abundant state–action trajectory datasets without additional environment interaction. However, offline GCRL still struggles with long-horizon tasks, even with recent advances that employ hierarchical policy structures, such as HIQL. To identify the root cause of this challenge, we make two observations. First, performance bottlenecks mainly stem from the high-level policy’s inability to generate appropriate subgoals. Second, when learning the high-level policy in the long-horizon regime, the sign of the advantage estimate frequently becomes incorrect. Thus, we argue that improving the value function to produce a clear advantage estimate for learning the high-level policy is essential. In this paper, we propose a simple yet effective solution: _**Option-aware Temporally Abstracted**_ value learning, dubbed **OTA**, which incorporates temporal abstraction into the temporal-difference learning process. By making the value update _option-aware_, our approach contracts the effective horizon length, enabling better advantage estimates even in long-horizon regimes. We experimentally show that the high-level policy learned using the OTA value function achieves strong performance on complex tasks from OGBench, a recently proposed offline GCRL benchmark, including maze navigation and visual robotic manipulation environments. Our code is available at https://github.com/ota-v/ota-v
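To illustrate the horizon-contraction idea in the abstract, the sketch below contrasts a standard 1-step TD target with a multi-step, option-level target. This is a minimal illustration, not the paper's actual OTA update: it assumes the common offline GCRL convention of a reward of -1 per step until the goal is reached, and the function names and the plain k-step form are hypothetical.

```python
def one_step_td_target(reward, gamma, v_next, done):
    """Standard 1-step TD target; effective horizon scales as 1 / (1 - gamma)."""
    return reward + gamma * (1.0 - done) * v_next


def option_aware_td_target(rewards, gamma, v_after_option, done):
    """Illustrative k-step (option-level) TD target over a length-k segment.

    Bootstrapping once per option of length k instead of every primitive
    step contracts the effective horizon by roughly a factor of k, which
    can make the sign of the resulting advantage estimate more reliable
    for training the high-level policy.
    """
    target = 0.0
    for i, r in enumerate(rewards):
        target += (gamma ** i) * r          # discounted rewards within the option
    k = len(rewards)
    target += (gamma ** k) * (1.0 - done) * v_after_option  # bootstrap after k steps
    return target
```

For example, with `gamma = 0.99`, per-step reward -1, and a bootstrap value of -5.0, the 1-step target is -5.95, while a 3-step option target sums three discounted rewards before bootstrapping once, pushing value information back over three transitions in a single update.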

Cite

Text

Ahn et al. "Option-Aware Temporally Abstracted Value for Offline Goal-Conditioned Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.

Markdown

[Ahn et al. "Option-Aware Temporally Abstracted Value for Offline Goal-Conditioned Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/ahn2025neurips-optionaware/)

BibTeX

@inproceedings{ahn2025neurips-optionaware,
  title     = {{Option-Aware Temporally Abstracted Value for Offline Goal-Conditioned Reinforcement Learning}},
  author    = {Ahn, Hongjoon and Choi, Heewoong and Han, Jisu and Moon, Taesup},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/ahn2025neurips-optionaware/}
}