Reinforcement Learning with Random Delays

Abstract

Action and observation delays occur commonly in many Reinforcement Learning applications, such as remote control scenarios. We study the anatomy of randomly delayed environments and show that partially resampling trajectory fragments in hindsight allows for off-policy multi-step value estimation. We apply this principle to derive Delay-Correcting Actor-Critic (DCAC), an algorithm based on Soft Actor-Critic that achieves significantly better performance in environments with delays. We show this theoretically and demonstrate it empirically on a delay-augmented version of the MuJoCo continuous control benchmark.
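
To make the randomly delayed setting concrete, below is a minimal sketch of a wrapper that injects random observation and action delays around a Gym-style environment (classic `step`/`reset` API). The class name `RandomDelayWrapper`, the delay ranges, and the buffering scheme are illustrative assumptions for this page, not the authors' released implementation.

```python
import random
from collections import deque

import gym


class RandomDelayWrapper(gym.Wrapper):
    """Illustrative wrapper: the agent receives observations that are a random
    number of steps old, and its actions take effect after a random delay.
    Names and parameters are assumptions, not the paper's code."""

    def __init__(self, env, max_obs_delay=2, max_act_delay=2):
        super().__init__(env)
        self.max_obs_delay = max_obs_delay
        self.max_act_delay = max_act_delay
        self.obs_buffer = deque()  # past observations awaiting delivery
        self.act_buffer = deque()  # sent actions awaiting application

    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        # Pre-fill both buffers so delayed lookups are always valid.
        self.obs_buffer = deque([obs] * (self.max_obs_delay + 1),
                                maxlen=self.max_obs_delay + 1)
        self.act_buffer = deque([self.env.action_space.sample()
                                 for _ in range(self.max_act_delay + 1)],
                                maxlen=self.max_act_delay + 1)
        return self.obs_buffer[0]

    def step(self, action):
        # The new action is buffered; an older action (random action delay)
        # is the one actually applied to the environment this step.
        self.act_buffer.append(action)
        act_delay = random.randint(0, self.max_act_delay)
        applied_action = self.act_buffer[-1 - act_delay]

        obs, reward, done, info = self.env.step(applied_action)

        # The agent only sees a past observation (random observation delay).
        self.obs_buffer.append(obs)
        obs_delay = random.randint(0, self.max_obs_delay)
        delayed_obs = self.obs_buffer[-1 - obs_delay]

        info = dict(info, obs_delay=obs_delay, act_delay=act_delay)
        return delayed_obs, reward, done, info
```

A wrapper of this kind only reproduces the delayed interaction loop; the paper's contribution, hindsight partial resampling of trajectory fragments for off-policy multi-step value estimation, operates on the trajectories such an environment generates.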

Cite

Text

Bouteiller et al. "Reinforcement Learning with Random Delays." International Conference on Learning Representations, 2021.

Markdown

[Bouteiller et al. "Reinforcement Learning with Random Delays." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/bouteiller2021iclr-reinforcement/)

BibTeX

@inproceedings{bouteiller2021iclr-reinforcement,
  title     = {{Reinforcement Learning with Random Delays}},
  author    = {Bouteiller, Yann and Ramstedt, Simon and Beltrame, Giovanni and Pal, Christopher and Binas, Jonathan},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/bouteiller2021iclr-reinforcement/}
}