No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL

Abstract

The performance of reinforcement learning (RL) agents is sensitive to the choice of hyperparameters. In real-world settings like robotics or industrial control systems, however, testing different hyperparameter configurations directly on the environment can be financially prohibitive, dangerous, or time-consuming. We focus on hyperparameter tuning from offline logs of data, to fully specify the hyperparameters for an RL agent that learns online in the real world. The approach is conceptually simple: we first learn a model of the environment from the offline data, which we call a calibration model, and then simulate learning in the calibration model to identify promising hyperparameters. Though such a natural idea is likely already used in industry, it has yet to be systematically investigated. We identify several criteria to make this strategy effective, and develop an approach that satisfies these criteria. We empirically investigate the method in a variety of settings to identify when it is effective and when it fails.
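To make the described pipeline concrete, here is a minimal, illustrative Python sketch under strong simplifying assumptions: a tabular environment, an empirical transition/reward model standing in for the calibration model, and Q-learning as the online agent. The names `CalibrationModel`, `q_learning`, and `offline_logs` are hypothetical and do not reflect the authors' implementation.

```python
# Sketch: offline hyperparameter tuning via a calibration model (illustrative only).
import numpy as np

class CalibrationModel:
    """Tabular model of the environment fit from offline transitions."""
    def __init__(self, transitions, n_states, n_actions, seed=0):
        self.n_states, self.n_actions = n_states, n_actions
        self.rng = np.random.default_rng(seed)
        # Empirical next-state counts and reward sums per (state, action).
        self.counts = np.zeros((n_states, n_actions, n_states))
        self.reward_sum = np.zeros((n_states, n_actions))
        for s, a, r, s_next in transitions:
            self.counts[s, a, s_next] += 1
            self.reward_sum[s, a] += r

    def step(self, s, a):
        total = self.counts[s, a].sum()
        if total == 0:                       # unvisited pair: self-loop, zero reward
            return s, 0.0
        p = self.counts[s, a] / total
        s_next = self.rng.choice(self.n_states, p=p)
        return int(s_next), self.reward_sum[s, a] / total

def q_learning(model, step_size, epsilon, n_steps, gamma=0.99):
    """Simulate the online agent inside the calibration model; return average reward."""
    Q = np.zeros((model.n_states, model.n_actions))
    s, total_reward = 0, 0.0
    for _ in range(n_steps):
        if model.rng.random() < epsilon:
            a = int(model.rng.integers(model.n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s_next, r = model.step(s, a)
        Q[s, a] += step_size * (r + gamma * Q[s_next].max() - Q[s, a])
        total_reward += r
        s = s_next
    return total_reward / n_steps

# Offline logs: (s, a, r, s_next) tuples collected by some behavior policy (toy data).
offline_logs = [(0, 0, 0.0, 1), (1, 1, 1.0, 0), (1, 0, 0.0, 1), (0, 1, 0.0, 0)]
model = CalibrationModel(offline_logs, n_states=2, n_actions=2)

# Evaluate each candidate configuration entirely inside the calibration model,
# then deploy only the best-performing one on the real system.
candidates = [{"step_size": a, "epsilon": e} for a in (0.05, 0.1, 0.5) for e in (0.05, 0.1)]
scores = {tuple(c.values()): q_learning(model, **c, n_steps=5000) for c in candidates}
best = max(scores, key=scores.get)
print("best (step_size, epsilon):", best)
```

The key design point in this sketch is that no candidate configuration ever touches the real environment: all trial-and-error learning happens in the calibration model, and only the selected hyperparameters are used for the agent that learns online.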

Cite

Text

Wang et al. "No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL." Transactions on Machine Learning Research, 2022.

Markdown

[Wang et al. "No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/wang2022tmlr-more/)

BibTeX

@article{wang2022tmlr-more,
  title     = {{No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL}},
  author    = {Wang, Han and Sakhadeo, Archit and White, Adam M and Bell, James M and Liu, Vincent and Zhao, Xutong and Liu, Puer and Kozuno, Tadashi and Fyshe, Alona and White, Martha},
  journal   = {Transactions on Machine Learning Research},
  year      = {2022},
  url       = {https://mlanthology.org/tmlr/2022/wang2022tmlr-more/}
}