RLeXplore: Accelerating Research in Intrinsically-Motivated Reinforcement Learning

Abstract

Extrinsic rewards can effectively guide reinforcement learning (RL) agents in specific tasks. However, they often fall short in complex environments due to the significant human effort required to design and annotate them. This limitation underscores the need for intrinsic rewards, which provide auxiliary, dense signals and enable agents to learn in an unsupervised manner. Although various intrinsic reward formulations have been proposed, their implementation and optimization details are insufficiently explored and lack standardization, thereby hindering research progress. To address this gap, we introduce RLeXplore, a unified, highly modularized, and plug-and-play framework offering reliable implementations of eight state-of-the-art intrinsic reward methods. Furthermore, we conduct an in-depth study that identifies critical implementation details and establishes well-justified standard practices in intrinsically-motivated RL. Our documentation, examples, and source code are available at [https://github.com/RLE-Foundation/RLeXplore](https://github.com/RLE-Foundation/RLeXplore).
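To make the role of intrinsic rewards concrete, the sketch below shows one such method (a Random Network Distillation-style novelty bonus) and how its signal is added to the extrinsic reward. This is a minimal, self-contained illustration only: the class, method, and parameter names are hypothetical assumptions and do not reflect RLeXplore's actual API; the library's real, modular implementations are in the linked repository.

```python
# Illustrative sketch only: names here are hypothetical, NOT RLeXplore's API.
import torch
import torch.nn as nn


class RandomNetworkDistillation:
    """Minimal RND-style intrinsic reward: prediction error against a fixed random target."""

    def __init__(self, obs_dim: int, embed_dim: int = 64, lr: float = 1e-3):
        self.target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim))
        for p in self.target.parameters():  # the target network stays fixed
            p.requires_grad_(False)
        self.opt = torch.optim.Adam(self.predictor.parameters(), lr=lr)

    def compute(self, obs: torch.Tensor) -> torch.Tensor:
        """Intrinsic reward = per-observation prediction error, a proxy for novelty."""
        with torch.no_grad():
            target_feat = self.target(obs)
        error = (self.predictor(obs) - target_feat).pow(2).mean(dim=-1)
        return error.detach()

    def update(self, obs: torch.Tensor) -> None:
        """Train the predictor so familiar observations yield smaller bonuses over time."""
        loss = (self.predictor(obs) - self.target(obs)).pow(2).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()


# Combining signals: the agent optimizes the extrinsic reward plus a scaled intrinsic bonus.
obs = torch.randn(32, 8)        # a batch of observations (obs_dim = 8)
extrinsic = torch.zeros(32)     # sparse or absent task reward
rnd = RandomNetworkDistillation(obs_dim=8)
total_reward = extrinsic + 0.1 * rnd.compute(obs)
rnd.update(obs)
```

In a plug-and-play setup of this kind, each intrinsic reward method sits behind a common compute-and-update interface so it can be swapped into an existing RL training loop without touching the policy-optimization code; the sketch above only illustrates the underlying idea, not the framework's interface.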

Cite

Text

Yuan et al. "RLeXplore: Accelerating Research in Intrinsically-Motivated Reinforcement Learning." Transactions on Machine Learning Research, 2025.

Markdown

[Yuan et al. "RLeXplore: Accelerating Research in Intrinsically-Motivated Reinforcement Learning." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/yuan2025tmlr-rlexplore/)

BibTeX

@article{yuan2025tmlr-rlexplore,
  title     = {{RLeXplore: Accelerating Research in Intrinsically-Motivated Reinforcement Learning}},
  author    = {Yuan, Mingqi and Castanyer, Roger Creus and Li, Bo and Jin, Xin and Zeng, Wenjun and Berseth, Glen},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/yuan2025tmlr-rlexplore/}
}