Reinforcement Learning with Dynamic Boltzmann SoftMax Updates
Abstract
Value function estimation (i.e., prediction) is an important task in reinforcement learning. The Boltzmann softmax operator is a natural value estimator and can provide several benefits. However, it does not satisfy the non-expansion property, and its direct use may fail to converge even in value iteration. In this paper, we propose to update the value function with the dynamic Boltzmann softmax (DBS) operator, which has good convergence properties in the settings of planning and learning. Experimental results on GridWorld show that the DBS operator enables better estimation of the value function, rectifying the convergence issue of the softmax operator. Finally, we propose the DBS-DQN algorithm by applying the DBS operator, which outperforms DQN substantially in 40 out of 49 Atari games.
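The core idea from the abstract can be illustrated with a short sketch: value iteration where the max backup is replaced by a Boltzmann softmax whose inverse temperature grows over iterations, so the operator approaches max in the limit. The tiny two-state MDP, the β_t = t² schedule, and all names below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def boltzmann_softmax(q, beta):
    # Boltzmann softmax operator: a weighted average of q-values
    # with weights proportional to exp(beta * q). As beta -> inf
    # this approaches max(q); as beta -> 0 it approaches the mean.
    w = np.exp(beta * (q - q.max()))  # shift by max for numerical stability
    return float((w * q).sum() / w.sum())

# Hypothetical deterministic 2-state, 2-action MDP (for illustration only):
# P[s, a] = next state, R[s, a] = immediate reward.
P = np.array([[0, 1], [0, 1]])
R = np.array([[0.0, 1.0], [0.5, 0.0]])
gamma = 0.9

# DBS-style value iteration: increase beta_t each iteration
# (here beta_t = t**2, an assumed power schedule).
V = np.zeros(2)
for t in range(1, 200):
    beta_t = t ** 2
    Q = R + gamma * V[P]  # one-step lookahead Q-values
    V = np.array([boltzmann_softmax(Q[s], beta_t) for s in range(2)])

# Standard value iteration with the max operator, for comparison.
V_star = np.zeros(2)
for _ in range(200):
    Q = R + gamma * V_star[P]
    V_star = Q.max(axis=1)

# Because beta_t -> inf, the DBS iterates should approach V_star.
```

A fixed finite β would instead converge to a biased estimate (and, as the abstract notes, a plain softmax backup can fail to converge at all); the growing schedule is what recovers the optimal values.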
Cite
Text
Pan et al. "Reinforcement Learning with Dynamic Boltzmann SoftMax Updates." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/276
Markdown
[Pan et al. "Reinforcement Learning with Dynamic Boltzmann SoftMax Updates." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/pan2020ijcai-reinforcement/) doi:10.24963/IJCAI.2020/276
BibTeX
@inproceedings{pan2020ijcai-reinforcement,
title = {{Reinforcement Learning with Dynamic Boltzmann SoftMax Updates}},
author = {Pan, Ling and Cai, Qingpeng and Meng, Qi and Chen, Wei and Huang, Longbo},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
  pages = {1992--1998},
doi = {10.24963/IJCAI.2020/276},
url = {https://mlanthology.org/ijcai/2020/pan2020ijcai-reinforcement/}
}