Adaptively Calibrated Critic Estimates for Deep Reinforcement Learning

Abstract

Accurate value estimates are important for off-policy reinforcement learning. Algorithms based on temporal difference learning are typically prone to an over- or underestimation bias that builds up over time. In this paper, we propose a general method called Adaptively Calibrated Critics (ACC) that uses the most recent high-variance but unbiased on-policy rollouts to alleviate the bias of the low-variance temporal difference targets. We apply ACC to Truncated Quantile Critics [22], an algorithm for continuous control that allows the bias to be regulated with a hyperparameter tuned per environment. The resulting algorithm adaptively adjusts this parameter during training, rendering hyperparameter search unnecessary, and sets a new state of the art on the OpenAI Gym continuous control benchmark among all algorithms that do not tune hyperparameters for each environment. Additionally, we demonstrate that ACC is quite general by applying it to TD3 [11] as well, where it also improves performance.
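
The following is a minimal sketch of the calibration idea described in the abstract, not the authors' implementation: a bias-controlling parameter is nudged according to the gap between critic estimates and unbiased Monte Carlo returns from recent on-policy rollouts. All names and constants (beta, BETA_MIN, BETA_MAX, LR_BETA, update_bias_parameter) are hypothetical and for illustration only.

```python
import numpy as np

BETA_MIN, BETA_MAX = -1.0, 1.0   # assumed range of the bias-controlling parameter
LR_BETA = 0.1                    # assumed step size for the adjustment


def update_bias_parameter(beta, q_estimates, mc_returns):
    """Nudge the bias-controlling parameter using recent on-policy rollouts.

    q_estimates: critic values Q(s, a) for state-action pairs from the latest rollouts
    mc_returns:  the corresponding unbiased (but high-variance) Monte Carlo returns
    """
    # Positive gap -> critic overestimates -> lower beta to make TD targets
    # more pessimistic; negative gap -> raise beta to make them more optimistic.
    estimation_gap = np.mean(np.asarray(q_estimates) - np.asarray(mc_returns))
    beta = beta - LR_BETA * estimation_gap
    return float(np.clip(beta, BETA_MIN, BETA_MAX))
```

In a TQC-style setup, such a parameter could, for instance, determine how many of the highest quantile estimates are dropped when forming the TD target, while a TD3-style setup would use an analogous knob; the abstract only states that the parameter regulating the bias is adjusted adaptively during training.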

Cite

Text

Dorka et al. "Adaptively Calibrated Critic Estimates for Deep Reinforcement Learning." NeurIPS 2021 Workshops: DeepRL, 2021.

Markdown

[Dorka et al. "Adaptively Calibrated Critic Estimates for Deep Reinforcement Learning." NeurIPS 2021 Workshops: DeepRL, 2021.](https://mlanthology.org/neuripsw/2021/dorka2021neuripsw-adaptively/)

BibTeX

@inproceedings{dorka2021neuripsw-adaptively,
  title     = {{Adaptively Calibrated Critic Estimates for Deep Reinforcement Learning}},
  author    = {Dorka, Nicolai and Boedecker, Joschka and Burgard, Wolfram},
  booktitle = {NeurIPS 2021 Workshops: DeepRL},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/dorka2021neuripsw-adaptively/}
}