Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration

Abstract

In this paper, sample-aware policy entropy regularization is proposed to enhance conventional policy entropy regularization for better exploration. Exploiting the sample distribution obtainable from the replay buffer, the proposed sample-aware entropy regularization maximizes the entropy of the weighted sum of the policy action distribution and the sample action distribution from the replay buffer, enabling sample-efficient exploration. A practical algorithm named diversity actor-critic (DAC) is developed by applying policy iteration to the objective function with the proposed sample-aware entropy regularization. Numerical results show that DAC significantly outperforms recent reinforcement learning algorithms.
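
To make the regularized objective concrete, below is a minimal LaTeX sketch of the sample-aware entropy term as described in the abstract. The notation is assumed for illustration: q(·|s) denotes the sample action distribution induced by the replay buffer, α ∈ [0, 1] is the mixture weight, and β is the entropy coefficient; it paraphrases the abstract's wording rather than reproducing the paper's exact equations.

% Sketch of a sample-aware entropy-regularized objective
% (alpha, beta, and q are assumed symbols, not taken verbatim from the paper)
\[
  J(\pi) \;=\; \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \pi}\!\left[
    r(s_t, a_t) \;+\; \beta\, \mathcal{H}\!\big(\alpha\,\pi(\cdot \mid s_t) + (1-\alpha)\, q(\cdot \mid s_t)\big)
  \right],
  \qquad
  \mathcal{H}(p) \;=\; -\int_{\mathcal{A}} p(a)\,\log p(a)\, da .
\]

Under this reading, α = 1 reduces the mixture to the policy distribution alone and recovers conventional policy entropy regularization, while α < 1 additionally rewards the policy for placing probability mass where the replay buffer has few samples, which matches the intuition of sample-efficient exploration stated in the abstract.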

Cite

Text

Han and Sung. "Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration." International Conference on Machine Learning, 2021.

Markdown

[Han and Sung. "Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/han2021icml-diversity/)

BibTeX

@inproceedings{han2021icml-diversity,
  title     = {{Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration}},
  author    = {Han, Seungyul and Sung, Youngchul},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {4018--4029},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/han2021icml-diversity/}
}