PiCor: Multi-Task Deep Reinforcement Learning with Policy Correction
Abstract
Multi-task deep reinforcement learning (DRL) ambitiously aims to train a general agent that masters multiple tasks simultaneously. However, the varying learning speeds of different tasks, compounded by negative gradient interference, make policy learning inefficient. In this work, we propose PiCor, an efficient multi-task DRL framework that splits learning into policy optimization and policy correction phases. The policy optimization phase improves the policy with any DRL algorithm on a single sampled task, without considering other tasks. The policy correction phase first constructs an adaptively adjusted performance constraint set; the intermediate policy learned in the first phase is then constrained to this set, which controls negative interference and balances learning speeds across tasks. Empirically, we demonstrate that PiCor outperforms previous methods and significantly improves sample efficiency on simulated robotic manipulation and continuous control tasks. We additionally show that adaptive weight adjustment can further improve data efficiency and performance.
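The abstract describes a two-phase loop: optimize the policy on one sampled task, then correct it against the tasks whose performance degraded. The sketch below illustrates how such a loop might be wired together; it is a minimal illustration based only on the abstract, and `optimize`, `performance`, `correct`, and the tolerance `alpha` are hypothetical placeholders, not the paper's actual interfaces.

```python
# Hypothetical sketch of a PiCor-style two-phase iteration.
# All function names and the tolerance `alpha` are illustrative
# assumptions, not the authors' implementation.
import random

def picor_step(policy, tasks, optimize, performance, correct, alpha=0.1):
    """One iteration: policy optimization, then policy correction."""
    # Phase 1: policy optimization on a single sampled task,
    # ignoring the other tasks (any DRL algorithm can be plugged in).
    task = random.choice(tasks)
    intermediate = optimize(policy, task)

    # Phase 2: policy correction. Build an adaptively adjusted
    # performance constraint set: here, the tasks whose performance
    # under the intermediate policy dropped by more than `alpha`.
    constraint_set = [
        t for t in tasks
        if performance(intermediate, t) < performance(policy, t) - alpha
    ]

    # Constrain the intermediate policy to the set, limiting negative
    # interference and balancing learning speeds across tasks.
    return correct(intermediate, constraint_set)
```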
Cite
Text
Bai et al. "PiCor: Multi-Task Deep Reinforcement Learning with Policy Correction." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I6.25825

Markdown
[Bai et al. "PiCor: Multi-Task Deep Reinforcement Learning with Policy Correction." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/bai2023aaai-picor/) doi:10.1609/AAAI.V37I6.25825

BibTeX
@inproceedings{bai2023aaai-picor,
title = {{PiCor: Multi-Task Deep Reinforcement Learning with Policy Correction}},
author = {Bai, Fengshuo and Zhang, Hongming and Tao, Tianyang and Wu, Zhiheng and Wang, Yanna and Xu, Bo},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {6728--6736},
doi = {10.1609/AAAI.V37I6.25825},
url = {https://mlanthology.org/aaai/2023/bai2023aaai-picor/}
}