Efficient Continuous Control with Double Actors and Regularized Critics

Abstract

Obtaining accurate value estimates is a critical problem in Reinforcement Learning (RL). Current value estimation methods in continuous control, such as DDPG and TD3, suffer from unnecessary over- or under-estimation. In this paper, we explore the potential of double actors, which have long been neglected, for better value estimation in the continuous setting. First, we find that double actors improve the exploration ability of the agent. Next, we show that double actors alleviate estimation bias, mitigating overestimation with a single critic and underestimation with double critics, respectively. Finally, to mitigate the potentially pessimistic value estimates of double critics, we propose to regularize the critics under the double-actor architecture. Combining these insights, we present the Double Actors Regularized Critics (DARC) algorithm. Extensive experiments on challenging continuous control benchmarks, MuJoCo and PyBullet, show that DARC significantly outperforms current baselines with higher average return and better sample efficiency.
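To make the critic-regularization idea concrete, the sketch below shows one plausible way a regularized critic target under double critics could look. It is a minimal, hypothetical illustration, not the paper's exact formulation: the names `nu` (weight between the pessimistic and optimistic critic targets) and `lambda_reg` (regularization coefficient), and the exact loss form, are assumptions for illustration only.

```python
import torch

def darc_style_critic_loss(q1, q2, q1_next, q2_next,
                           reward, done, gamma=0.99, nu=0.1, lambda_reg=0.005):
    """Hypothetical sketch of a regularized double-critic loss.

    q1, q2: current critic estimates Q_i(s, a) for a sampled batch.
    q1_next, q2_next: target-critic values at the next state (in a
    double-actor setup these would already reflect actions proposed by
    both actors). All inputs are 1-D tensors of equal length.
    """
    with torch.no_grad():
        # Soft combination of min and max over the two critics, instead of
        # the purely pessimistic clipped-double-Q (min-only) target.
        q_min = torch.min(q1_next, q2_next)
        q_max = torch.max(q1_next, q2_next)
        target = reward + gamma * (1.0 - done) * (nu * q_min + (1.0 - nu) * q_max)

    # Standard TD errors for both critics against the shared target.
    td_loss = ((q1 - target) ** 2).mean() + ((q2 - target) ** 2).mean()
    # Regularize the two critics toward each other to reduce the spread
    # between their estimates (the "regularized critics" idea, sketched).
    reg_loss = ((q1 - q2) ** 2).mean()
    return td_loss + lambda_reg * reg_loss
```

As a design note, setting `nu = 1.0` in this sketch recovers a fully pessimistic clipped-double-Q-style target, while smaller values trade some pessimism for a less biased estimate; the regularization term keeps the two critics from drifting far apart.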

Cite

Text

Lyu et al. "Efficient Continuous Control with Double Actors and Regularized Critics." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I7.20732

Markdown

[Lyu et al. "Efficient Continuous Control with Double Actors and Regularized Critics." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/lyu2022aaai-efficient/) doi:10.1609/AAAI.V36I7.20732

BibTeX

@inproceedings{lyu2022aaai-efficient,
  title     = {{Efficient Continuous Control with Double Actors and Regularized Critics}},
  author    = {Lyu, Jiafei and Ma, Xiaoteng and Yan, Jiangpeng and Li, Xiu},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {7655--7663},
  doi       = {10.1609/AAAI.V36I7.20732},
  url       = {https://mlanthology.org/aaai/2022/lyu2022aaai-efficient/}
}