A Max-Min Entropy Framework for Reinforcement Learning

Abstract

In this paper, we propose a max-min entropy framework for reinforcement learning (RL) to overcome a limitation of the soft actor-critic (SAC) algorithm, which implements maximum entropy RL in model-free, sample-based learning. Whereas maximum entropy RL guides policy learning toward states with high entropy in the future, the proposed max-min entropy framework aims to learn to visit states with low entropy and to maximize the entropy of these low-entropy states in order to promote better exploration. For general Markov decision processes (MDPs), an efficient algorithm is constructed under the proposed max-min entropy framework based on the disentanglement of exploration and exploitation. Numerical results show that the proposed algorithm yields drastic performance improvement over current state-of-the-art RL algorithms.
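To make the contrast concrete, the sketch below restates the two objectives in standard maximum entropy RL notation. The first line is the usual SAC-style maximum entropy objective; the second is only an illustrative reading of the abstract's max-min idea (raising the entropy of the lowest-entropy visited states rather than the discounted average), not the paper's exact formulation. The symbols pi, r, gamma, alpha, and H follow the common SAC convention and are assumptions of this sketch, not notation taken from the paper.

% Standard maximum entropy RL (SAC-style) objective:
% reward plus a discounted average of the policy's action entropy.
J_{\text{maxent}}(\pi)
  = \mathbb{E}_{\pi}\!\left[ \sum_{t} \gamma^{t}
      \big( r(s_t, a_t) + \alpha\, \mathcal{H}\!\left(\pi(\cdot \mid s_t)\right) \big) \right]

% Illustrative max-min entropy objective (a reading of the abstract,
% not the paper's actual formulation): keep the return term, but
% reward raising the entropy at the *lowest*-entropy visited states.
J_{\text{max-min}}(\pi)
  = \mathbb{E}_{\pi}\!\left[ \sum_{t} \gamma^{t} r(s_t, a_t) \right]
    + \alpha \min_{s \in \text{visited states}} \mathcal{H}\!\left(\pi(\cdot \mid s)\right)

Maximizing the minimum pushes the policy toward exactly those states where its action entropy is currently lowest and raises the entropy there, matching the exploration behavior described in the abstract; per the abstract, the actual algorithm realizes this through a disentanglement of exploration and exploitation rather than a literal min operator.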

Cite

Text

Han and Sung. "A Max-Min Entropy Framework for Reinforcement Learning." Neural Information Processing Systems, 2021.

Markdown

[Han and Sung. "A Max-Min Entropy Framework for Reinforcement Learning." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/han2021neurips-maxmin/)

BibTeX

@inproceedings{han2021neurips-maxmin,
  title     = {{A Max-Min Entropy Framework for Reinforcement Learning}},
  author    = {Han, Seungyul and Sung, Youngchul},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/han2021neurips-maxmin/}
}