Learning Values Across Many Orders of Magnitude
Abstract
Most learning algorithms are not invariant to the scale of the signal that is being approximated. We propose to adaptively normalize the targets used in the learning updates. This is important in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior. Our main motivation is prior work on learning to play Atari games, where the rewards were clipped to a predetermined range. This clipping facilitates learning across many different games with a single learning algorithm, but a clipped reward function can result in qualitatively different behavior. Using adaptive normalization we can remove this domain-specific heuristic without diminishing overall performance.
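The abstract describes the idea only at a high level. Below is a minimal sketch of adaptive target normalization, assuming exponential-moving-average statistics for the target mean and scale and a final linear layer whose parameters are rescaled so the unnormalized outputs are preserved when the statistics change; the class name, the `beta`/`eps` parameters, and the specific update rules are illustrative assumptions, not a verbatim transcription of the paper's algorithm.

```python
import numpy as np

class AdaptiveTargetNormalizer:
    """Keeps running estimates of the target mean and scale, normalizes
    targets before they enter the learning update, and rescales a linear
    output layer so unnormalized predictions stay unchanged.

    This is a hypothetical sketch; parameter names and the EMA statistics
    are assumptions made for illustration.
    """

    def __init__(self, beta=1e-3, eps=1e-4):
        self.beta = beta        # step size for the running statistics
        self.eps = eps          # lower bound on the scale, avoids division by ~0
        self.mean = 0.0         # running estimate of E[Y]
        self.mean_sq = 1.0      # running estimate of E[Y^2]

    @property
    def scale(self):
        var = self.mean_sq - self.mean ** 2
        return max(np.sqrt(max(var, 0.0)), self.eps)

    def update(self, targets, w, b):
        """Update statistics from a batch of raw targets and rescale the
        final linear layer (w, b) to preserve the unnormalized outputs."""
        old_mean, old_scale = self.mean, self.scale
        self.mean += self.beta * (np.mean(targets) - self.mean)
        self.mean_sq += self.beta * (np.mean(targets ** 2) - self.mean_sq)
        new_mean, new_scale = self.mean, self.scale
        # Preserve outputs: new_scale*(w'x + b') + new_mean == old_scale*(w x + b) + old_mean
        w = w * (old_scale / new_scale)
        b = (old_scale * b + old_mean - new_mean) / new_scale
        return w, b

    def normalize(self, targets):
        return (targets - self.mean) / self.scale

    def denormalize(self, outputs):
        return self.scale * outputs + self.mean
```

As a usage note, one would call `update` with each batch of raw targets (e.g. returns) before the gradient step, then regress the network's normalized output toward `normalize(targets)`; `denormalize` recovers predictions on the original scale.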
Cite
Text
van Hasselt et al. "Learning Values Across Many Orders of Magnitude." Neural Information Processing Systems, 2016.
Markdown
[van Hasselt et al. "Learning Values Across Many Orders of Magnitude." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/vanhasselt2016neurips-learning/)
BibTeX
@inproceedings{vanhasselt2016neurips-learning,
title = {{Learning Values Across Many Orders of Magnitude}},
author = {van Hasselt, Hado P and Guez, Arthur and Hessel, Matteo and Mnih, Volodymyr and Silver, David},
booktitle = {Neural Information Processing Systems},
year = {2016},
pages = {4287--4295},
url = {https://mlanthology.org/neurips/2016/vanhasselt2016neurips-learning/}
}