CORL: Research-Oriented Deep Offline Reinforcement Learning Library
Abstract
CORL is an open-source library that provides single-file implementations of deep offline reinforcement learning algorithms. It emphasizes a simple development experience with a straightforward codebase and a modern experiment-tracking tool. In CORL, each method's implementation is isolated in a distinct single file, making performance-relevant details easier to recognise. Additionally, an experiment tracking feature is available to help log metrics, hyperparameters, dependencies, and more to the cloud. Finally, we have verified the reliability of the implementations by benchmarking them on the commonly employed D4RL benchmark.
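The single-file pattern described above typically bundles an algorithm's configuration, utilities, and training loop into one self-contained script. A minimal sketch of that layout (the names and hyperparameters here are illustrative assumptions, not CORL's actual API):

```python
import random
from dataclasses import dataclass, asdict


@dataclass
class TrainConfig:
    # hypothetical hyperparameters for one offline RL method
    env: str = "halfcheetah-medium-v2"  # D4RL dataset name
    seed: int = 0
    batch_size: int = 256
    learning_rate: float = 3e-4


def set_seed(seed: int) -> None:
    """Seed every RNG used by the script for reproducibility."""
    random.seed(seed)


def train(config: TrainConfig) -> dict:
    """Self-contained entry point: everything the method needs lives in this file."""
    set_seed(config.seed)
    # ... build networks, load the offline dataset, run gradient steps ...
    # hyperparameters are returned alongside metrics so a tracking tool
    # can log both to the cloud in one call
    return {"config": asdict(config), "metrics": {"normalized_score": 0.0}}


if __name__ == "__main__":
    result = train(TrainConfig())
    print(result["config"]["env"])
```

Keeping the whole method in one file means performance-relevant details are visible at the call site rather than hidden behind layers of shared abstractions.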
Cite
Text
Tarasov et al. "CORL: Research-Oriented Deep Offline Reinforcement Learning Library." NeurIPS 2022 Workshops: Offline_RL, 2022.
Markdown
[Tarasov et al. "CORL: Research-Oriented Deep Offline Reinforcement Learning Library." NeurIPS 2022 Workshops: Offline_RL, 2022.](https://mlanthology.org/neuripsw/2022/tarasov2022neuripsw-corl/)
BibTeX
@inproceedings{tarasov2022neuripsw-corl,
  title     = {{CORL: Research-Oriented Deep Offline Reinforcement Learning Library}},
  author    = {Tarasov, Denis and Nikulin, Alexander and Akimov, Dmitry and Kurenkov, Vladislav and Kolesnikov, Sergey},
  booktitle = {NeurIPS 2022 Workshops: Offline_RL},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/tarasov2022neuripsw-corl/}
}