Domain Adaptation in Reinforcement Learning via Latent Unified State Representation

Abstract

Despite the recent success of deep reinforcement learning (RL), domain adaptation remains an open problem. Although the generalization ability of RL agents is critical for the real-world applicability of Deep RL, zero-shot policy transfer is still a challenging problem since even minor visual changes could make the trained agent completely fail in the new task. To address this issue, we propose a two-stage RL agent that first learns a latent unified state representation (LUSR) which is consistent across multiple domains, and then performs RL training in one source domain based on LUSR in the second stage. The cross-domain consistency of LUSR allows the policy acquired from the source domain to generalize to other target domains without extra training. We first demonstrate our approach in variants of CarRacing games with customized manipulations, and then verify it in CARLA, an autonomous driving simulator with more complex and realistic visual observations. Our results show that this approach can achieve state-of-the-art domain adaptation performance in related RL tasks and outperforms prior approaches based on latent-representation-based RL and image-to-image translation.
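The core idea of the abstract can be illustrated with a small toy sketch. Note this is a hypothetical illustration of the two-stage principle only, not the paper's actual model (which learns LUSR with a neural encoder from images): here two visual domains differ by a constant appearance shift, a simple encoder removes that shift to produce a unified latent, and a policy trained on source-domain latents acts identically on the unseen target domain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical, for illustration): 5 states with 8-dim
# task-relevant content; the target domain shows the same content
# under a constant appearance shift (e.g., a background-color change).
content = rng.normal(size=(5, 8))
source_obs = content          # source-domain observations
target_obs = content + 3.0    # target-domain observations (shifted)

def encode(obs):
    """Stand-in 'unified' encoder: subtracts each observation's mean,
    discarding the domain-specific offset and keeping shared content."""
    return obs - obs.mean(axis=1, keepdims=True)

# Stage 1: the latent representation agrees across domains.
z_src = encode(source_obs)
z_tgt = encode(target_obs)
assert np.allclose(z_src, z_tgt)

# Stage 2: a policy head (stand-in for one trained by RL on the
# source domain's latents) transfers zero-shot to the target domain.
W = rng.normal(size=(8, 2))
def policy(z):
    return np.argmax(z @ W, axis=1)

assert np.array_equal(policy(z_src), policy(z_tgt))
```

Because the encoder maps both domains to the same latents, no additional training is needed in the target domain, which is the zero-shot transfer property the abstract describes.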

Cite

Text

Xing et al. "Domain Adaptation in Reinforcement Learning via Latent Unified State Representation." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I12.17251

Markdown

[Xing et al. "Domain Adaptation in Reinforcement Learning via Latent Unified State Representation." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/xing2021aaai-domain/) doi:10.1609/AAAI.V35I12.17251

BibTeX

@inproceedings{xing2021aaai-domain,
  title     = {{Domain Adaptation in Reinforcement Learning via Latent Unified State Representation}},
  author    = {Xing, Jinwei and Nagata, Takashi and Chen, Kexin and Zou, Xinyun and Neftci, Emre and Krichmar, Jeffrey L.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {10452--10459},
  doi       = {10.1609/AAAI.V35I12.17251},
  url       = {https://mlanthology.org/aaai/2021/xing2021aaai-domain/}
}