CORL: Research-Oriented Deep Offline Reinforcement Learning Library

Abstract

CORL is an open-source library that provides thoroughly benchmarked single-file implementations of both deep offline and offline-to-online reinforcement learning algorithms. It emphasizes a simple development experience with a straightforward codebase and modern experiment-tracking tools. In CORL, each method's implementation is isolated in a separate single file, making performance-relevant details easier to recognize. Additionally, an experiment tracking feature is available to log metrics, hyperparameters, dependencies, and more to the cloud. Finally, we have ensured the reliability of the implementations by benchmarking them on commonly employed D4RL datasets, providing a transparent source of results that can be reused by robust evaluation tools such as performance profiles, probability of improvement, or expected online performance.
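The single-file pattern described in the abstract typically pairs a hyperparameter config with the entire training loop in one module, so that every performance-relevant detail is visible in a single place. Below is a minimal sketch of that style; the `TrainConfig` name, its fields, and the placeholder loop are illustrative assumptions, not CORL's actual code or defaults:

```python
from dataclasses import dataclass, asdict


@dataclass
class TrainConfig:
    # Illustrative hyperparameters, not CORL's actual defaults.
    env: str = "halfcheetah-medium-v2"
    batch_size: int = 256
    learning_rate: float = 3e-4
    seed: int = 0


def train(config: TrainConfig) -> dict:
    # In a single-file implementation, all algorithm logic lives here;
    # an experiment tracker would receive asdict(config) once at startup
    # and per-step metric dictionaries during training.
    run_log = {"config": asdict(config), "metrics": []}
    for step in range(3):  # placeholder for the real training loop
        run_log["metrics"].append({"step": step, "loss": 1.0 / (step + 1)})
    return run_log


run = train(TrainConfig())
```

Because the config and the loop share one file, reproducing a run amounts to re-executing that file with the same config values, which is what makes cloud-side logging of hyperparameters alongside metrics useful.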

Cite

Text

Tarasov et al. "CORL: Research-Oriented Deep Offline Reinforcement Learning Library." Neural Information Processing Systems, 2023.

Markdown

[Tarasov et al. "CORL: Research-Oriented Deep Offline Reinforcement Learning Library." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/tarasov2023neurips-corl/)

BibTeX

@inproceedings{tarasov2023neurips-corl,
  title     = {{CORL: Research-Oriented Deep Offline Reinforcement Learning Library}},
  author    = {Tarasov, Denis and Nikulin, Alexander and Akimov, Dmitry and Kurenkov, Vladislav and Kolesnikov, Sergey},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/tarasov2023neurips-corl/}
}