Sharing Experience in Multitask Reinforcement Learning

Abstract

In multitask reinforcement learning, tasks often have sub-tasks that share the same solution, even though the overall tasks are different. If the shared portions could be effectively identified, the learning process could be improved, since all the samples the tasks collect in the shared space could be used. In this paper, we propose a Sharing Experience Framework (SEF) for the simultaneous training of multiple tasks. In SEF, a confidence sharing agent uses task-specific rewards from the environment to identify similar parts that should be shared across tasks and defines those parts as shared-regions between tasks. The shared-regions are expected to guide the task-policies in sharing their experience during the learning process. The experiments highlight that our framework improves the performance and stability of learning task-policies, and can help task-policies avoid local optima.
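
To make the sharing mechanism concrete, below is a minimal Python sketch of the idea the abstract describes: a confidence estimate over states marks where two tasks' reward signals agree, and experience collected in those shared-regions is replicated across the tasks' buffers. Everything here (ConfidenceSharingAgent, route_transition, the reward-agreement rule, the thresholds) is an illustrative assumption, not the paper's actual SEF implementation.

from collections import defaultdict

class ConfidenceSharingAgent:
    # Illustrative stand-in for the "confidence sharing agent" above: it
    # counts, per state, how often the task-specific rewards of two tasks
    # agree, and treats high-agreement states as a shared-region.
    def __init__(self, threshold=0.8, min_visits=5):
        self.agree = defaultdict(int)   # per-state count of agreeing rewards
        self.total = defaultdict(int)   # per-state visit count
        self.threshold = threshold
        self.min_visits = min_visits

    def observe(self, state, reward_a, reward_b):
        self.total[state] += 1
        if reward_a == reward_b:
            self.agree[state] += 1

    def in_shared_region(self, state):
        n = self.total[state]
        return n >= self.min_visits and self.agree[state] / n >= self.threshold

def route_transition(transition, source_task, buffers, sharing_agent):
    # A transition always trains its own task; if its state lies in a
    # shared-region, it is replicated into every other task's buffer too.
    state = transition[0]
    buffers[source_task].append(transition)
    if sharing_agent.in_shared_region(state):
        for task, buf in buffers.items():
            if task != source_task:
                buf.append(transition)

# Usage: after both tasks see identical rewards in states 0-3, a sample
# collected by task_a in state 0 also lands in task_b's buffer.
buffers = {"task_a": [], "task_b": []}
agent = ConfidenceSharingAgent()
for s in range(4):
    for _ in range(5):
        agent.observe(s, reward_a=1.0, reward_b=1.0)
route_transition((0, "up", 1.0, 1), "task_a", buffers, agent)
print(len(buffers["task_b"]))  # -> 1

The fixed agreement threshold stands in for whatever confidence measure the framework actually learns; the point of the sketch is only that shared-regions act as a gate on which samples may flow between task-policies.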

Cite

Text

Vuong et al. "Sharing Experience in Multitask Reinforcement Learning." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/505

Markdown

[Vuong et al. "Sharing Experience in Multitask Reinforcement Learning." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/vuong2019ijcai-sharing/) doi:10.24963/IJCAI.2019/505

BibTeX

@inproceedings{vuong2019ijcai-sharing,
  title     = {{Sharing Experience in Multitask Reinforcement Learning}},
  author    = {Vuong, Tung-Long and Van Nguyen, Do and Nguyen, Tai-Long and Bui, Cong-Minh and Kieu, Hai-Dang and Ta, Viet-Cuong and Tran, Quoc-Long and Le, Thanh Ha},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {3642--3648},
  doi       = {10.24963/IJCAI.2019/505},
  url       = {https://mlanthology.org/ijcai/2019/vuong2019ijcai-sharing/}
}