Solving Complex Manipulation Tasks with Model-Assisted Model-Free Reinforcement Learning

Abstract

In this paper, we propose a novel deep reinforcement learning approach for improving the sample efficiency of a model-free actor-critic method by using a learned model to encourage exploration. The basic idea is to generate artificial transitions with noisy actions, which are then used to update the critic. To counteract model bias, we introduce a high initialization for the critic and two filters for the artificial transitions. Finally, we evaluate our approach with the TD3 algorithm on different robotic tasks and demonstrate that it achieves better performance with higher sample efficiency than several other model-based and model-free methods.
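The core loop described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function and parameter names (`generate_artificial_transitions`, `noise_std`, `disagreement_thresh`) are hypothetical, and the single ensemble-disagreement filter shown here stands in for the paper's two transition filters, whose exact criteria are not given in the abstract.

```python
import random

def generate_artificial_transitions(model_ensemble, states, policy,
                                    noise_std=0.2, n_noisy=4,
                                    disagreement_thresh=0.05):
    """Sketch: create critic-update transitions from a learned model.

    For each real state, the policy action is perturbed with Gaussian
    noise and rolled one step through the learned model. Transitions on
    which the model ensemble disagrees too much are discarded, as a
    proxy for filtering out model bias (hypothetical criterion).
    """
    artificial = []
    for s in states:
        a = policy(s)
        for _ in range(n_noisy):
            a_noisy = a + random.gauss(0.0, noise_std)
            # Each ensemble member predicts (next_state, reward).
            preds = [m(s, a_noisy) for m in model_ensemble]
            mean_ns = sum(p[0] for p in preds) / len(preds)
            mean_r = sum(p[1] for p in preds) / len(preds)
            disagreement = max(abs(p[0] - mean_ns) for p in preds)
            if disagreement <= disagreement_thresh:
                # Kept transitions would be added to the replay buffer
                # and used for TD3 critic updates.
                artificial.append((s, a_noisy, mean_r, mean_ns))
    return artificial
```

With a deterministic toy ensemble (e.g. two identical linear models), every noisy transition passes the filter; in practice the ensemble members would be separately trained networks whose disagreement grows off-distribution.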

Cite

Text

Hu and Weng. "Solving Complex Manipulation Tasks with Model-Assisted Model-Free Reinforcement Learning." Conference on Robot Learning, 2022.

Markdown

[Hu and Weng. "Solving Complex Manipulation Tasks with Model-Assisted Model-Free Reinforcement Learning." Conference on Robot Learning, 2022.](https://mlanthology.org/corl/2022/hu2022corl-solving/)

BibTeX

@inproceedings{hu2022corl-solving,
  title     = {{Solving Complex Manipulation Tasks with Model-Assisted Model-Free Reinforcement Learning}},
  author    = {Hu, Jianshu and Weng, Paul},
  booktitle = {Conference on Robot Learning},
  year      = {2022},
  pages     = {1299--1308},
  volume    = {205},
  url       = {https://mlanthology.org/corl/2022/hu2022corl-solving/}
}