A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots
Abstract
As reinforcement learning (RL) achieves more success in solving complex tasks, more care is needed to ensure that RL research is reproducible and that the algorithms therein can be compared easily and fairly with minimal bias. RL results are, however, notoriously hard to reproduce due to the algorithms' intrinsic variance, the environments' stochasticity, and numerous (potentially unreported) hyper-parameters. In this work we investigate the many issues that lead to irreproducible research and how to manage them. We further show how a rigorous and standardised evaluation approach eases the documentation, evaluation, and fair comparison of different algorithms, emphasising the importance of choosing the right measurement metrics and conducting proper statistics on the results for unbiased reporting.
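The abstract's call for "proper statistics on the results" can be made concrete with a small sketch. The following Python snippet (not from the paper; `train_and_evaluate` is a hypothetical placeholder for any RL training run) illustrates one common practice: evaluating across several random seeds and reporting a bootstrapped 95% confidence interval on the mean return rather than a single run.

```python
# Minimal sketch, assuming a hypothetical `train_and_evaluate` routine:
# run an RL algorithm under several random seeds and report the mean
# return with a bootstrap 95% confidence interval.
import numpy as np

rng = np.random.default_rng(0)

def train_and_evaluate(seed: int) -> float:
    """Hypothetical placeholder: train with `seed`, return mean episodic return."""
    return float(rng.normal(loc=100.0, scale=15.0))  # stand-in for a real run

seeds = range(10)  # multiple seeds, since single RL runs are high-variance
returns = np.array([train_and_evaluate(s) for s in seeds])

# Bootstrap the distribution of the mean return over seeds.
boot = rng.choice(returns, size=(10_000, len(returns)), replace=True).mean(axis=1)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean return: {returns.mean():.1f}  (95% bootstrap CI: [{lo:.1f}, {hi:.1f}])")
```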
Cite
Text
Lynnerup et al. "A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots." Conference on Robot Learning, 2019.
Markdown
[Lynnerup et al. "A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots." Conference on Robot Learning, 2019.](https://mlanthology.org/corl/2019/lynnerup2019corl-survey/)
BibTeX
@inproceedings{lynnerup2019corl-survey,
  title     = {{A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots}},
  author    = {Lynnerup, Nicolai A. and Nolling, Laura and Hasle, Rasmus and Hallam, John},
  booktitle = {Conference on Robot Learning},
  year      = {2019},
  pages     = {466--489},
  volume    = {100},
  url       = {https://mlanthology.org/corl/2019/lynnerup2019corl-survey/}
}