ShiftNorm: On Data Efficiency in Reinforcement Learning with Shift Normalization

Abstract

We propose ShiftNorm, a simple yet promising data augmentation that can be applied to standard model-free algorithms to improve sample efficiency in high-dimensional image-based reinforcement learning (RL). Concretely, the differentiable ShiftNorm combines original samples with reparameterized virtual samples, encouraging the image encoder to produce invariant representations. Our approach demonstrates substantial gains, achieving new state-of-the-art results on 8 of 9 tasks from the DeepMind Control Suite at 500k steps.
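The abstract mentions a differentiable, shift-based augmentation that produces "reparameterized virtual samples" from image observations. Below is a minimal sketch, not the authors' code, of a generic differentiable random-shift augmentation for pixel observations in RL, implemented with padding and bilinear grid sampling; the function name, the pad size of 4, and the square-frame assumption are illustrative choices and not taken from the paper.

```python
# Sketch of a differentiable random-shift augmentation for image observations.
# Assumptions (not from the paper): pad=4, square frames, PyTorch tensors.
import torch
import torch.nn.functional as F


def random_shift(obs: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Shift each image in the batch by a random offset of up to `pad` pixels.

    obs: (B, C, H, W) float tensor of stacked frames, with H == W.
    The output has the same shape; the operation stays differentiable w.r.t.
    `obs` because it is built from padding and bilinear grid sampling.
    """
    b, c, h, w = obs.shape
    assert h == w, "this sketch assumes square frames"

    # Replicate-pad so shifted crops remain inside the padded image.
    padded = F.pad(obs, (pad, pad, pad, pad), mode="replicate")

    # Base sampling grid over the padded image (normalized to [-1, 1]).
    eps = 1.0 / (h + 2 * pad)
    coords = torch.linspace(-1.0 + eps, 1.0 - eps, h + 2 * pad,
                            device=obs.device, dtype=obs.dtype)[:h]
    xs, ys = torch.meshgrid(coords, coords, indexing="xy")
    base_grid = torch.stack([xs, ys], dim=-1)            # (H, W, 2)
    base_grid = base_grid.unsqueeze(0).repeat(b, 1, 1, 1)  # (B, H, W, 2)

    # Per-image random translation, expressed as a shift of the sampling grid
    # (a "virtual sample" reparameterized from the original observation).
    shift = torch.randint(0, 2 * pad + 1, size=(b, 1, 1, 2),
                          device=obs.device, dtype=obs.dtype)
    shift = shift * 2.0 / (h + 2 * pad)

    return F.grid_sample(padded, base_grid + shift,
                         padding_mode="zeros", align_corners=False)
```

In a typical pixel-based RL setup, such an augmentation is applied to observations before the image encoder so that the encoder is trained toward shift-invariant representations; how ShiftNorm itself normalizes or mixes the original and virtual samples is detailed in the paper.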

Cite

Text

Liu et al. "ShiftNorm: On Data Efficiency in Reinforcement Learning with Shift Normalization." ICLR 2022 Workshops: GPL, 2022.

Markdown

[Liu et al. "ShiftNorm: On Data Efficiency in Reinforcement Learning with Shift Normalization." ICLR 2022 Workshops: GPL, 2022.](https://mlanthology.org/iclrw/2022/liu2022iclrw-shiftnorm/)

BibTeX

@inproceedings{liu2022iclrw-shiftnorm,
  title     = {{ShiftNorm: On Data Efficiency in Reinforcement Learning with Shift Normalization}},
  author    = {Liu, Sicong and Zhang, Xi Sheryl and Li, Yushuo and Zhang, Yifan and Cheng, Jian},
  booktitle = {ICLR 2022 Workshops: GPL},
  year      = {2022},
  url       = {https://mlanthology.org/iclrw/2022/liu2022iclrw-shiftnorm/}
}