Multi-Task Deep Reinforcement Learning for Continuous Action Control

Abstract

In this paper, we propose a deep reinforcement learning algorithm to learn multiple tasks concurrently. The algorithm incorporates a new network architecture that reduces the number of parameters needed per task by more than 75% compared with typical single-task deep reinforcement learning algorithms. The proposed algorithm and network fuse images with sensor data and were tested with up to 12 movement-based control tasks on a simulated Pioneer 3AT robot equipped with a camera and range sensors. Results show that the proposed algorithm and network can learn skills that are as good as those learned by a comparable single-task learning algorithm. Results also show that learning performance remains consistent even as the number of tasks and the number of constraints on the tasks increase.

Cite

Text

Yang et al. "Multi-Task Deep Reinforcement Learning for Continuous Action Control." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/461

Markdown

[Yang et al. "Multi-Task Deep Reinforcement Learning for Continuous Action Control." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/yang2017ijcai-multi/) doi:10.24963/IJCAI.2017/461

BibTeX

@inproceedings{yang2017ijcai-multi,
  title     = {{Multi-Task Deep Reinforcement Learning for Continuous Action Control}},
  author    = {Yang, Zhaoyang and Merrick, Kathryn E. and Abbass, Hussein A. and Jin, Lianwen},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {3301--3307},
  doi       = {10.24963/IJCAI.2017/461},
  url       = {https://mlanthology.org/ijcai/2017/yang2017ijcai-multi/}
}