Episodic Novelty Through Temporal Distance
Abstract
Exploration in sparse reward environments remains a significant challenge in reinforcement learning, particularly in Contextual Markov Decision Processes (CMDPs), where environments differ across episodes. Existing episodic intrinsic motivation methods for CMDPs primarily rely on count-based approaches, which are ineffective in large state spaces, or on similarity-based methods that lack appropriate metrics for state comparison. To address these shortcomings, we propose Episodic Novelty Through Temporal Distance (ETD), a novel approach that introduces temporal distance as a robust metric for state similarity and intrinsic reward computation. By employing contrastive learning, ETD accurately estimates temporal distances and derives intrinsic rewards based on the novelty of states within the current episode. Experiments on challenging MiniGrid tasks demonstrate that ETD significantly outperforms state-of-the-art methods, highlighting its effectiveness in enhancing exploration and generalization in sparse reward CMDPs.
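The following is a minimal, illustrative sketch of the core idea described in the abstract, not the paper's exact algorithm: an encoder is trained contrastively so that distances in embedding space track temporal distances between states, and a state's episodic intrinsic reward is taken as its estimated temporal distance to the nearest state already visited in the current episode. All names here (TemporalEncoder, episodic_intrinsic_reward), the triplet-style loss, and the nearest-neighbor aggregation are assumptions made for illustration.

import torch
import torch.nn as nn

class TemporalEncoder(nn.Module):
    # Maps observations to an embedding space where Euclidean distance
    # is trained to approximate temporal distance between states.
    def __init__(self, obs_dim: int, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def contrastive_loss(enc, anchors, positives, negatives, margin=1.0):
    # Triplet-style objective (an assumption): pull temporally close
    # state pairs together and push temporally distant pairs apart,
    # so embedding distance becomes a proxy for temporal distance.
    za, zp, zn = enc(anchors), enc(positives), enc(negatives)
    d_pos = (za - zp).norm(dim=-1)
    d_neg = (za - zn).norm(dim=-1)
    return torch.relu(d_pos - d_neg + margin).mean()

def episodic_intrinsic_reward(enc, obs, episode_memory):
    # Reward a state by its estimated temporal distance to the nearest
    # state visited so far in the current episode (assumed aggregation):
    # states far in time from everything seen this episode count as novel.
    with torch.no_grad():
        z = enc(obs.unsqueeze(0))
        zm = enc(torch.stack(episode_memory))
        return (zm - z).norm(dim=-1).min().item()

Under this sketch, the agent would add the intrinsic reward to the sparse task reward at each step and reset the episodic memory at episode boundaries, which is what makes the novelty signal per-episode rather than global.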
Cite
Text
Jiang et al. "Episodic Novelty Through Temporal Distance." NeurIPS 2024 Workshops: IMOL, 2024.
Markdown
[Jiang et al. "Episodic Novelty Through Temporal Distance." NeurIPS 2024 Workshops: IMOL, 2024.](https://mlanthology.org/neuripsw/2024/jiang2024neuripsw-episodic/)
BibTeX
@inproceedings{jiang2024neuripsw-episodic,
title = {{Episodic Novelty Through Temporal Distance}},
author = {Jiang, Yuhua and Liu, Qihan and Yang, Yiqin and Ma, Xiaoteng and Zhong, Dianyu and Xu, Bo and Yang, Jun and Liang, Bin and Zhang, Chongjie and Zhao, Qianchuan},
booktitle = {NeurIPS 2024 Workshops: IMOL},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/jiang2024neuripsw-episodic/}
}