Maximum Entropy SoftMax Policy Gradient via Entropy Advantage Estimation

Abstract

Entropy regularisation is a widely adopted technique that enhances the performance and stability of policy optimisation. Maximum entropy reinforcement learning (MaxEnt RL) regularises policy evaluation by augmenting the objective with an entropy term, offering theoretical benefits for policy optimisation. However, its practical application in straightforward direct policy gradient settings remains surprisingly underexplored. We hypothesise that this is due to the difficulty of managing the entropy reward in practice. This paper proposes Entropy Advantage Policy Optimisation (EAPO), a simple method that facilitates the implementation of MaxEnt RL by estimating the task and entropy objectives separately. Our empirical evaluations demonstrate that extending Proximal Policy Optimisation (PPO) and Trust Region Policy Optimisation (TRPO) within the MaxEnt framework improves optimisation performance, generalisation, and exploration across various environments. Moreover, our method provides a stable and performant MaxEnt RL algorithm for discrete action spaces.
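The core idea named in the abstract — estimating the task return and the policy-entropy return separately and combining their advantages before the policy update — can be sketched roughly as below. This is a minimal illustration, not the paper's exact formulation: the `tau` temperature, the zero-initialised value estimates, and the use of a GAE-style estimator for both streams are assumptions made for the sketch.

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalised Advantage Estimation over one trajectory.

    `values` carries one extra bootstrap entry at the end."""
    T = len(rewards)
    adv = np.zeros(T)
    last = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        last = delta + gamma * lam * last
        adv[t] = last
    return adv

# Toy rollout: task rewards plus per-step policy entropies H(pi(.|s_t)),
# the latter treated as an auxiliary "entropy reward" stream.
rng = np.random.default_rng(0)
task_rewards = rng.normal(size=8)
entropies = rng.uniform(0.5, 1.5, size=8)

# Two critics (zero-initialised here for illustration): one predicts the
# task return, the other the discounted entropy return.
task_values = np.zeros(9)
ent_values = np.zeros(9)

tau = 0.01  # entropy temperature (illustrative value)
a_task = gae(task_rewards, task_values)
a_ent = gae(entropies, ent_values)

# Combined MaxEnt advantage fed into a standard PPO/TRPO update.
advantage = a_task + tau * a_ent
print(advantage.shape)  # (8,)
```

Keeping the two advantage estimates separate means the entropy bonus never contaminates the task critic's regression target, which is the practical difficulty with a single augmented reward that the abstract alludes to.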

Cite

Text

Choe and Kim. "Maximum Entropy SoftMax Policy Gradient via Entropy Advantage Estimation." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/552

Markdown

[Choe and Kim. "Maximum Entropy SoftMax Policy Gradient via Entropy Advantage Estimation." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/choe2025ijcai-maximum/) doi:10.24963/IJCAI.2025/552

BibTeX

@inproceedings{choe2025ijcai-maximum,
  title     = {{Maximum Entropy SoftMax Policy Gradient via Entropy Advantage Estimation}},
  author    = {Choe, Jean Seong Bjorn and Kim, Jong-Kook},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {4958--4966},
  doi       = {10.24963/IJCAI.2025/552},
  url       = {https://mlanthology.org/ijcai/2025/choe2025ijcai-maximum/}
}