Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels
Abstract
We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. The approach leverages input perturbations commonly used in computer vision to transform input examples, as well as to regularize the value function and policy. Existing model-free approaches, such as Soft Actor-Critic (SAC), are unable to train deep networks effectively from image pixels. However, adding our augmentation method dramatically improves SAC's performance, enabling it to reach state-of-the-art results on the DeepMind Control Suite, surpassing model-based methods (Hafner et al., 2019; Lee et al., 2019; Hafner et al., 2018) and recently proposed contrastive learning (Srinivas et al., 2020). Our approach, which we dub DrQ: Data-regularized Q, can be combined with any model-free reinforcement learning algorithm. We further demonstrate this by applying it to DQN, significantly improving its data efficiency on the Atari 100k benchmark.
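To make the regularization concrete, the sketch below shows the two ingredients the abstract describes, written in PyTorch: a random-shift image augmentation and a TD target averaged over several augmented copies of the next observation. This is a minimal illustration under stated assumptions, not the authors' exact implementation; the actor and critic_target callables, the function names, and the pad/k defaults are all hypothetical.

import torch
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    # Pad each image with replicated border pixels, then crop back to the
    # original size at a random offset: a random-shift augmentation.
    n, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(n):
        top = int(torch.randint(0, 2 * pad + 1, (1,)))
        left = int(torch.randint(0, 2 * pad + 1, (1,)))
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

def augmented_td_target(critic_target, actor, next_obs, reward, discount, k=2):
    # Regularize the Q-target by averaging it over k independently
    # augmented copies of the next observation (hypothetical helper;
    # actor and critic_target are assumed callables).
    targets = []
    for _ in range(k):
        obs_aug = random_shift(next_obs)
        action = actor(obs_aug)
        targets.append(reward + discount * critic_target(obs_aug, action))
    return torch.stack(targets).mean(dim=0)

Averaging the target over several augmentations lowers its variance, which is the regularization effect that lets an otherwise unchanged model-free learner train stably from pixels.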
Cite
Text
Yarats et al. "Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels." International Conference on Learning Representations, 2021.
Markdown
[Yarats et al. "Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/yarats2021iclr-image/)
BibTeX
@inproceedings{yarats2021iclr-image,
  title     = {{Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels}},
  author    = {Yarats, Denis and Kostrikov, Ilya and Fergus, Rob},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/yarats2021iclr-image/}
}