CleanRL: High-Quality Single-File Implementations of Deep Reinforcement Learning Algorithms
Abstract
CleanRL is an open-source library that provides high-quality single-file implementations of Deep Reinforcement Learning (DRL) algorithms. These single-file implementations are self-contained algorithm variant files, such as dqn.py, ppo.py, and ppo_atari.py, that each include all of the algorithm variant's implementation details. Such a paradigm significantly reduces the complexity and the lines of code (LOC) of each implemented variant, making the variants quicker and easier to understand. This paradigm also gives researchers the most fine-grained control over all aspects of the algorithm in a single file, allowing them to prototype novel features quickly. Despite having succinct implementations, CleanRL's codebase is thoroughly documented and benchmarked to ensure performance is on par with reputable sources. As a result, CleanRL produces a repository tailor-fit for two purposes: 1) understanding all implementation details of DRL algorithms and 2) quickly prototyping novel features. CleanRL's source code can be found at https://github.com/vwxyzjn/cleanrl.
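To make the single-file paradigm concrete, the sketch below is an illustrative skeleton, not CleanRL's actual dqn.py: the flag names and the random-action loop are assumptions standing in for a real algorithm, and only the gymnasium API is used. It shows how hyperparameters, seeding, environment setup, and the main loop can all live in one script that reads top to bottom.

```python
# A minimal sketch (not CleanRL's actual dqn.py) of a self-contained single-file
# script: hyperparameters, environment setup, and the loop are all in one place.
# The random action is a placeholder for a learned policy and update rule.
import argparse
import random

import gymnasium as gym
import numpy as np

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--env-id", type=str, default="CartPole-v1")   # illustrative flag names
    parser.add_argument("--total-timesteps", type=int, default=500)
    parser.add_argument("--seed", type=int, default=1)
    args = parser.parse_args()

    # Seeding and environment creation are visible in the same file as the loop.
    random.seed(args.seed)
    np.random.seed(args.seed)
    env = gym.make(args.env_id)
    obs, _ = env.reset(seed=args.seed)

    episodic_return = 0.0
    for step in range(args.total_timesteps):
        action = env.action_space.sample()  # placeholder for the agent's policy
        obs, reward, terminated, truncated, info = env.step(action)
        episodic_return += reward
        if terminated or truncated:
            print(f"step={step} episodic_return={episodic_return}")
            episodic_return = 0.0
            obs, _ = env.reset()
    env.close()
```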
Cite
Text
Huang et al. "CleanRL: High-Quality Single-File Implementations of Deep Reinforcement Learning Algorithms." Machine Learning Open Source Software, 2022.
Markdown
[Huang et al. "CleanRL: High-Quality Single-File Implementations of Deep Reinforcement Learning Algorithms." Machine Learning Open Source Software, 2022.](https://mlanthology.org/mloss/2022/huang2022jmlr-cleanrl/)
BibTeX
@article{huang2022jmlr-cleanrl,
title = {{CleanRL: High-Quality Single-File Implementations of Deep Reinforcement Learning Algorithms}},
author = {Huang, Shengyi and Dossa, Rousslan Fernand Julien and Ye, Chang and Braga, Jeff and Chakraborty, Dipam and Mehta, Kinal and Araújo, João G.M.},
journal = {Machine Learning Open Source Software},
year = {2022},
pages = {1-18},
volume = {23},
url = {https://mlanthology.org/mloss/2022/huang2022jmlr-cleanrl/}
}