Increasing the Action Gap: New Operators for Reinforcement Learning
Abstract
This paper introduces new optimality-preserving operators on Q-functions. We first describe an operator for tabular representations, the consistent Bellman operator, which incorporates a notion of local policy consistency. We show that this local consistency leads to an increase in the action gap at each state; increasing this gap, we argue, mitigates the undesirable effects of approximation and estimation errors on the induced greedy policies. This operator can also be applied to discretized continuous space and time problems, and we provide empirical results evidencing superior performance in this context. Extending the idea of a locally consistent operator, we then derive sufficient conditions for an operator to preserve optimality, leading to a family of operators which includes our consistent Bellman operator. As corollaries we provide a proof of optimality for Baird's advantage learning algorithm and derive other gap-increasing operators with interesting properties. We conclude with an empirical study on 60 Atari 2600 games illustrating the strong potential of these new operators.
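To make the action-gap idea concrete, below is a minimal sketch (not from the paper) using a hypothetical one-state MDP with two self-loop actions. As we read the paper's construction, the consistent Bellman operator bootstraps from Q(x, a) itself when the sampled next state equals the current state, rather than from max_b Q(x, b); on this toy problem that leaves the optimal value and greedy policy unchanged while widening the action gap. The MDP, rewards, and discount factor are illustrative assumptions.

```python
# Hypothetical toy example: one state x with two self-loop actions.
# Action 0 pays reward 1.0, action 1 pays 0.9; discount gamma = 0.9.
import numpy as np

gamma = 0.9
rewards = np.array([1.0, 0.9])   # r(x, a) for the single state x

def bellman(q):
    # Standard backup: r(x, a) + gamma * max_b Q(x', b), with x' = x here.
    return rewards + gamma * q.max()

def consistent_bellman(q):
    # Consistent backup: on a transition that returns to the same state,
    # bootstrap from Q(x, a) itself instead of max_b Q(x, b).
    return rewards + gamma * q

def fixed_point(op, iters=1000):
    q = np.zeros(2)
    for _ in range(iters):
        q = op(q)
    return q

q_std = fixed_point(bellman)             # ~[10.0, 9.9] -> action gap ~0.1
q_con = fixed_point(consistent_bellman)  # ~[10.0, 9.0] -> action gap ~1.0
print("standard  :", q_std, "gap =", q_std.max() - q_std.min())
print("consistent:", q_con, "gap =", q_con.max() - q_con.min())
```

Both operators agree on the optimal value (10.0) and the greedy action, but the consistent backup pushes the suboptimal action's value down, enlarging the gap that approximation or estimation error would have to overcome to flip the greedy policy.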
Cite
Text
Bellemare et al. "Increasing the Action Gap: New Operators for Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10303
Markdown
[Bellemare et al. "Increasing the Action Gap: New Operators for Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/bellemare2016aaai-increasing/) doi:10.1609/AAAI.V30I1.10303
BibTeX
@inproceedings{bellemare2016aaai-increasing,
title = {{Increasing the Action Gap: New Operators for Reinforcement Learning}},
author = {Bellemare, Marc G. and Ostrovski, Georg and Guez, Arthur and Thomas, Philip S. and Munos, Rémi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2016},
pages = {1476-1483},
doi = {10.1609/AAAI.V30I1.10303},
url = {https://mlanthology.org/aaai/2016/bellemare2016aaai-increasing/}
}