A Family of Robust Stochastic Operators for Reinforcement Learning

Abstract

We consider a new family of stochastic operators for reinforcement learning, designed to alleviate the negative effects of, and be more robust to, approximation and estimation errors. We establish various theoretical results, including that our family of operators preserves optimality and increases the action gap in a stochastic sense. Our empirical results illustrate the strong benefits of these robust stochastic operators, which significantly outperform the classical Bellman operator as well as recently proposed operators.

Cite

Text

Lu et al. "A Family of Robust Stochastic Operators for Reinforcement Learning." Neural Information Processing Systems, 2019.

Markdown

[Lu et al. "A Family of Robust Stochastic Operators for Reinforcement Learning." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/lu2019neurips-family/)

BibTeX

@inproceedings{lu2019neurips-family,
  title     = {{A Family of Robust Stochastic Operators for Reinforcement Learning}},
  author    = {Lu, Yingdong and Squillante, Mark and Wu, Chai Wah},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {15652--15662},
  url       = {https://mlanthology.org/neurips/2019/lu2019neurips-family/}
}