Pearl: A Production-Ready Reinforcement Learning Agent
Abstract
Reinforcement learning (RL) is a versatile framework for optimizing long-term goals. Although many real-world problems can be formalized with RL, learning and deploying a performant RL policy requires a system designed to address several important challenges, including the exploration-exploitation dilemma, partial observability, dynamic action spaces, and safety concerns. While the importance of these challenges has been well recognized, existing open-source RL libraries do not explicitly address them. This paper introduces Pearl, a Production-Ready RL software package designed to embrace these challenges in a modular way. In addition to presenting benchmarking results, we also highlight examples of Pearl's ongoing industry adoption to demonstrate its advantages for production use cases. Pearl is open sourced on GitHub at github.com/facebookresearch/pearl and its official website is pearlagent.github.io.
Cite
Text
Zhu et al. "Pearl: A Production-Ready Reinforcement Learning Agent." Machine Learning Open Source Software, 2024.
Markdown
[Zhu et al. "Pearl: A Production-Ready Reinforcement Learning Agent." Machine Learning Open Source Software, 2024.](https://mlanthology.org/mloss/2024/zhu2024jmlr-pearl/)
BibTeX
@article{zhu2024jmlr-pearl,
  title = {{Pearl: A Production-Ready Reinforcement Learning Agent}},
  author = {Zhu, Zheqing and de Salvo Braz, Rodrigo and Bhandari, Jalaj and Jiang, Daniel and Wan, Yi and Efroni, Yonathan and Wang, Liyuan and Xu, Ruiyang and Guo, Hongbo and Nikulkov, Alex and Korenkevych, Dmytro and Dogan, Urun and Cheng, Frank and Wu, Zheng and Xu, Wanqiao},
  journal = {Machine Learning Open Source Software},
  year = {2024},
  pages = {1--30},
  volume = {25},
  url = {https://mlanthology.org/mloss/2024/zhu2024jmlr-pearl/}
}