Stabilizing Off-Policy Deep Reinforcement Learning from Pixels

Abstract

Off-policy reinforcement learning (RL) from pixel observations is notoriously unstable. As a result, many successful algorithms must combine different domain-specific practices and auxiliary losses to learn meaningful behaviors in complex environments. In this work, we provide a novel analysis demonstrating that these instabilities arise from performing temporal-difference learning with a convolutional encoder and low-magnitude rewards. We show that this new visual deadly triad causes unstable training and premature convergence to degenerate solutions, a phenomenon we name catastrophic self-overfitting. Based on our analysis, we propose A-LIX, a method that applies adaptive regularization to the encoder’s gradients and, via a dual objective, explicitly prevents catastrophic self-overfitting. By applying A-LIX, we significantly outperform the prior state-of-the-art on the DeepMind Control and Atari benchmarks without any data augmentation or auxiliary losses.

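A minimal PyTorch sketch of the kind of encoder-gradient regularization the abstract describes: the convolutional encoder's feature map is bilinearly resampled at randomly shifted spatial coordinates, so each backpropagated gradient is spread over neighbouring feature-map cells rather than concentrated at a single location. The function name, the fixed shift range S, and the use of grid_sample are assumptions made for illustration only; the paper's A-LIX additionally adapts the amount of smoothing online with a dual objective, which this sketch omits.

# Illustrative sketch only; not the authors' reference implementation.
import torch
import torch.nn.functional as F


def local_feature_mixing(features: torch.Tensor, S: float = 1.0) -> torch.Tensor:
    """Bilinearly resample each spatial location at a randomly shifted coordinate.

    features: (B, C, H, W) output of the convolutional encoder.
    S: maximum shift, measured in feature-map cells (fixed here; adaptive in A-LIX).
    """
    B, C, H, W = features.shape
    device = features.device

    # Base sampling grid in normalized [-1, 1] coordinates, as expected by grid_sample.
    ys = torch.linspace(-1.0, 1.0, H, device=device)
    xs = torch.linspace(-1.0, 1.0, W, device=device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    base_grid = torch.stack((grid_x, grid_y), dim=-1)            # (H, W, 2), (x, y) order
    base_grid = base_grid.unsqueeze(0).expand(B, H, W, 2)

    # Per-location random shifts of up to S cells, converted to normalized units.
    max_shift = torch.tensor(
        [2.0 * S / max(W - 1, 1), 2.0 * S / max(H - 1, 1)], device=device
    )
    shifts = (torch.rand(B, H, W, 2, device=device) * 2.0 - 1.0) * max_shift

    # Bilinear interpolation spreads each output gradient over neighbouring input cells,
    # smoothing the gradients that reach the convolutional encoder.
    return F.grid_sample(
        features, base_grid + shifts,
        mode="bilinear", padding_mode="border", align_corners=True,
    )

In use, such a layer would be placed between the encoder and the critic/actor heads so that the smoothing acts on the gradients flowing back into the convolutional stack.
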
Cite

Text

Cetin et al. "Stabilizing Off-Policy Deep Reinforcement Learning from Pixels." International Conference on Machine Learning, 2022.

Markdown

[Cetin et al. "Stabilizing Off-Policy Deep Reinforcement Learning from Pixels." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/cetin2022icml-stabilizing/)

BibTeX

@inproceedings{cetin2022icml-stabilizing,
  title     = {{Stabilizing Off-Policy Deep Reinforcement Learning from Pixels}},
  author    = {Cetin, Edoardo and Ball, Philip J and Roberts, Stephen and Celiktutan, Oya},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {2784--2810},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/cetin2022icml-stabilizing/}
}