On the Data-Efficiency with Contrastive Image Transformation in Reinforcement Learning
Abstract
Data-efficiency has always been an essential issue in pixel-based reinforcement learning (RL), as the agent must learn not only decision-making but also meaningful representations from images. The line of reinforcement learning with data augmentation shows significant improvements in sample-efficiency. However, it is challenging to guarantee an optimality-invariant transformation; that is, augmented data can easily be recognized by the agent as a completely different state. To this end, we propose the Contrastive Invariant Transformation (CoIT), a simple yet promising learnable data augmentation that combines with standard model-free algorithms to improve sample-efficiency. Concretely, the differentiable CoIT leverages original samples together with augmented samples and steers the state encoder toward a contrastive invariant embedding. We evaluate our approach on the DeepMind Control Suite and Atari 100K. Empirical results verify the advantages of CoIT, enabling it to outperform the state-of-the-art on various tasks. Source code is available at https://github.com/mooricAnna/CoIT.
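The following is a minimal sketch of the contrastive invariant objective the abstract describes, assuming a PyTorch setup; the names `encoder` and `coit_aug` (a learnable, differentiable augmentation module) are hypothetical placeholders, not the repository's actual API. It illustrates how original and augmented views of the same observation can be pulled together in embedding space while gradients flow into both the encoder and the augmentation parameters.

```python
import torch
import torch.nn.functional as F

def contrastive_invariance_loss(encoder, coit_aug, obs, temperature=0.1):
    """InfoNCE-style loss over a batch of image observations.

    encoder:  maps (B, C, H, W) observations to (B, D) embeddings (hypothetical).
    coit_aug: a learnable, differentiable image transformation (hypothetical).
    """
    z_orig = F.normalize(encoder(obs), dim=-1)           # anchors: original views
    z_aug = F.normalize(encoder(coit_aug(obs)), dim=-1)  # positives: augmented views

    # Cosine-similarity logits; the diagonal holds the positive pairs,
    # i.e., two views of the same underlying state.
    logits = z_orig @ z_aug.t() / temperature
    labels = torch.arange(obs.size(0), device=obs.device)

    # Because coit_aug is differentiable, this loss trains the augmentation
    # jointly with the state encoder toward an invariant embedding.
    return F.cross_entropy(logits, labels)
```

In practice, a term like this would serve as an auxiliary loss alongside the standard model-free RL objective (e.g., actor-critic losses), consistent with the abstract's claim that CoIT combines with standard algorithms.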
Cite
Text
Liu et al. "On the Data-Efficiency with Contrastive Image Transformation in Reinforcement Learning." International Conference on Learning Representations, 2023.

Markdown
[Liu et al. "On the Data-Efficiency with Contrastive Image Transformation in Reinforcement Learning." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/liu2023iclr-dataefficiency/)

BibTeX
@inproceedings{liu2023iclr-dataefficiency,
title = {{On the Data-Efficiency with Contrastive Image Transformation in Reinforcement Learning}},
author = {Liu, Sicong and Zhang, Xi Sheryl and Li, Yushuo and Zhang, Yifan and Cheng, Jian},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/liu2023iclr-dataefficiency/}
}