Representation Learning for Out-of-Distribution Generalization in Reinforcement Learning

Abstract

Learning data representations that are useful for various downstream tasks is a cornerstone of artificial intelligence. While existing methods are typically evaluated on downstream tasks such as classification or generative image quality, we propose to assess representations through their usefulness in downstream control tasks, such as reaching or pushing objects. By training over 10,000 reinforcement learning policies, we extensively evaluate the extent to which different representation properties affect out-of-distribution (OOD) generalization. Finally, we demonstrate zero-shot transfer of these policies from simulation to the real world, without any domain randomization or fine-tuning. This paper aims to establish the first systematic characterization of the usefulness of learned representations for real-world OOD downstream tasks.

Cite

Text

Träuble et al. "Representation Learning for Out-of-Distribution Generalization in Reinforcement Learning." ICML 2021 Workshops: URL, 2021.

Markdown

[Träuble et al. "Representation Learning for Out-of-Distribution Generalization in Reinforcement Learning." ICML 2021 Workshops: URL, 2021.](https://mlanthology.org/icmlw/2021/trauble2021icmlw-representation/)

BibTeX

@inproceedings{trauble2021icmlw-representation,
  title     = {{Representation Learning for Out-of-Distribution Generalization in Reinforcement Learning}},
  author    = {Träuble, Frederik and Dittadi, Andrea and Wüthrich, Manuel and Widmaier, Felix and Gehler, Peter Vincent and Winther, Ole and Locatello, Francesco and Bachem, Olivier and Schölkopf, Bernhard and Bauer, Stefan},
  booktitle = {ICML 2021 Workshops: URL},
  year      = {2021},
  url       = {https://mlanthology.org/icmlw/2021/trauble2021icmlw-representation/}
}