Rewarding Episodic Visitation Discrepancy for Exploration in Reinforcement Learning

Abstract

Exploration is critical for deep reinforcement learning in complex environments with high-dimensional observations and sparse rewards. To address this problem, recent approaches leverage intrinsic rewards to improve exploration, such as novelty-based and prediction-based exploration. However, many intrinsic reward modules require sophisticated structures and representation learning, resulting in prohibitive computational complexity and unstable performance. In this paper, we propose Rewarding Episodic Visitation Discrepancy (REVD), a computationally efficient and quantified exploration method. More specifically, REVD provides intrinsic rewards by evaluating the Rényi divergence-based visitation discrepancy between episodes. To estimate the divergence efficiently, a $k$-nearest neighbor estimator is used together with a randomly initialized state encoder. Finally, REVD is evaluated on Atari games and PyBullet Robotics Environments. Extensive experiments demonstrate that REVD significantly improves the sample efficiency of reinforcement learning algorithms and outperforms benchmark methods.
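The abstract describes the core recipe: embed states with a fixed, randomly initialized encoder and compare the current episode's visitation to a previous episode's via a $k$-nearest-neighbor estimate. The sketch below illustrates that idea only; it is not the authors' implementation. The random linear projection, the choice of $k$, and the ratio-of-kNN-distances reward are assumptions made for illustration (the paper's exact reward involves the Rényi divergence order, which is omitted here).

import numpy as np

def knn_distances(queries, refs, k=3):
    # Distance from each query embedding to its k-th nearest neighbor in refs.
    d = np.linalg.norm(queries[:, None, :] - refs[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k - 1]

def episodic_discrepancy_reward(current_states, previous_states, encoder, k=3):
    # Per-state intrinsic reward for the current episode: how far each embedded
    # state lies from the previous episode, relative to its own episode.
    # This is a kNN proxy for an episodic visitation discrepancy.
    cur = current_states @ encoder            # fixed random projection (never trained)
    prev = previous_states @ encoder
    inter = knn_distances(cur, prev, k)       # distance to the previous episode
    intra = knn_distances(cur, cur, k + 1)    # distance within the current episode (skips self)
    return inter / (intra + 1e-8)             # larger when new regions are visited

rng = np.random.default_rng(0)
obs_dim, latent_dim = 8, 16
encoder = rng.normal(size=(obs_dim, latent_dim))   # randomly initialized state encoder
ep_prev = rng.normal(size=(128, obs_dim))          # states from a previous episode
ep_cur = rng.normal(size=(128, obs_dim)) + 1.0     # states from the current episode
rewards = episodic_discrepancy_reward(ep_cur, ep_prev, encoder)
print(rewards.shape)  # (128,) intrinsic rewards, added to the extrinsic reward during training

In practice such rewards are scaled and combined with the environment reward before being fed to an off-the-shelf RL algorithm such as PPO; see the paper for the exact reward definition and scaling schedule.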

Cite

Text

Yuan et al. "Rewarding Episodic Visitation Discrepancy for Exploration in Reinforcement Learning." NeurIPS 2022 Workshops: DeepRL, 2022.

Markdown

[Yuan et al. "Rewarding Episodic Visitation Discrepancy for Exploration in Reinforcement Learning." NeurIPS 2022 Workshops: DeepRL, 2022.](https://mlanthology.org/neuripsw/2022/yuan2022neuripsw-rewarding/)

BibTeX

@inproceedings{yuan2022neuripsw-rewarding,
  title     = {{Rewarding Episodic Visitation Discrepancy for Exploration in Reinforcement Learning}},
  author    = {Yuan, Mingqi and Li, Bo and Jin, Xin and Zeng, Wenjun},
  booktitle = {NeurIPS 2022 Workshops: DeepRL},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/yuan2022neuripsw-rewarding/}
}