Position: Benchmarking Is Limited in Reinforcement Learning Research
Abstract
Novel reinforcement learning algorithms, or improvements on existing ones, are commonly justified by evaluating their performance on benchmark environments and are compared to an ever-changing set of standard algorithms. However, despite numerous calls for improvements, experimental practices continue to produce misleading or unsupported claims. One reason for the ongoing substandard practices is that conducting rigorous benchmarking experiments requires substantial computational time. This work investigates the sources of increased computation costs in rigorous experiment designs. We show that conducting rigorous performance benchmarks often incurs prohibitive computational costs. As a result, we argue for using an additional experimentation paradigm to overcome the limitations of benchmarking.
Cite
Text
Jordan et al. "Position: Benchmarking Is Limited in Reinforcement Learning Research." International Conference on Machine Learning, 2024.
Markdown
[Jordan et al. "Position: Benchmarking Is Limited in Reinforcement Learning Research." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/jordan2024icml-position/)
BibTeX
@inproceedings{jordan2024icml-position,
title = {{Position: Benchmarking Is Limited in Reinforcement Learning Research}},
author = {Jordan, Scott M. and White, Adam and Da Silva, Bruno Castro and White, Martha and Thomas, Philip S.},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {22551--22569},
volume = {235},
url = {https://mlanthology.org/icml/2024/jordan2024icml-position/}
}