Offline Multitask Representation Learning for Reinforcement Learning

Abstract

We study offline multitask representation learning in reinforcement learning (RL), where a learner is provided with offline datasets from multiple tasks that share a common representation and is asked to learn that shared representation. We theoretically investigate offline multitask low-rank RL and propose a new algorithm, MORL, for offline multitask representation learning. Furthermore, we examine downstream RL in reward-free, offline, and online scenarios, where the agent is given a new task that shares the same representation as the upstream offline tasks. Our theoretical results demonstrate the benefit of using the representation learned from the upstream offline tasks instead of learning the representation of the low-rank model directly.
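For context, the "low-rank" setting named in the abstract is usually defined through a transition kernel that factorizes via a shared feature map. The following is a minimal sketch in standard notation (the symbols φ*, μ*, and d are the usual ones from the low-rank MDP literature, not taken from this abstract):

$$
P_t(s' \mid s, a) \;=\; \big\langle \phi^*(s, a),\, \mu_t^*(s') \big\rangle,
\qquad \phi^*: \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d .
$$

Under this factorization, the representation φ* is shared across all tasks t while the task-specific factor μ_t* varies; the upstream goal is to recover φ* from offline multitask data so that a downstream task can be treated as (approximately) linear in the learned features.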

Cite

Text

Ishfaq et al. "Offline Multitask Representation Learning for Reinforcement Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-2256

Markdown

[Ishfaq et al. "Offline Multitask Representation Learning for Reinforcement Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/ishfaq2024neurips-offline/) doi:10.52202/079017-2256

BibTeX

@inproceedings{ishfaq2024neurips-offline,
  title     = {{Offline Multitask Representation Learning for Reinforcement Learning}},
  author    = {Ishfaq, Haque and Nguyen-Tang, Thanh and Feng, Songtao and Arora, Raman and Wang, Mengdi and Yin, Ming and Precup, Doina},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2256},
  url       = {https://mlanthology.org/neurips/2024/ishfaq2024neurips-offline/}
}