Reward Centering
Abstract
We show that discounted methods for solving continuing reinforcement learning problems can perform significantly better if they center their rewards by subtracting out the rewards' empirical average. The improvement is substantial at commonly used discount factors and increases further as the discount factor approaches one. In addition, we show that if a _problem's_ rewards are shifted by a constant, then standard methods perform much worse, whereas methods with reward centering are unaffected. Estimating the average reward is straightforward in the on-policy setting; we propose a slightly more sophisticated method for the off-policy setting. Reward centering is a general idea, so we expect almost every reinforcement-learning algorithm to benefit from the addition of reward centering.
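As a rough illustration of the idea described in the abstract, the sketch below applies reward centering to a tabular TD(0)-style value update for a continuing task: a running average reward r_bar is subtracted from each observed reward before the usual discounted update, and r_bar itself is maintained by a simple incremental mean (the straightforward on-policy case mentioned above). The function name, parameters, and step sizes are illustrative assumptions, not code from the paper.

```python
import numpy as np

def centered_td_update(V, r_bar, s, r, s_next,
                       gamma=0.99, alpha=0.1, eta=0.01):
    """One tabular TD(0) update with reward centering (illustrative sketch).

    V            : numpy array of state-value estimates
    r_bar        : current estimate of the average reward (float)
    s, r, s_next : observed transition (state, reward, next state)
    gamma, alpha : discount factor and value step size
    eta          : step size for the average-reward estimate
    Returns the updated (V, r_bar).
    """
    # TD error computed with the centered reward (r - r_bar)
    delta = (r - r_bar) + gamma * V[s_next] - V[s]
    V = V.copy()
    V[s] += alpha * delta
    # Simple on-policy estimate of the average reward: an incremental mean
    r_bar += eta * (r - r_bar)
    return V, r_bar

# Example usage on a toy 3-state continuing problem
V = np.zeros(3)
r_bar = 0.0
V, r_bar = centered_td_update(V, r_bar, s=0, r=1.0, s_next=1)
```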
Cite
Text
Naik et al. "Reward Centering." ICML 2024 Workshops: ARLET, 2024.

Markdown

[Naik et al. "Reward Centering." ICML 2024 Workshops: ARLET, 2024.](https://mlanthology.org/icmlw/2024/naik2024icmlw-reward/)

BibTeX
@inproceedings{naik2024icmlw-reward,
  title     = {{Reward Centering}},
  author    = {Naik, Abhishek and Wan, Yi and Tomar, Manan and Sutton, Richard S.},
  booktitle = {ICML 2024 Workshops: ARLET},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/naik2024icmlw-reward/}
}