Soft-Robust Actor-Critic Policy-Gradient

Abstract

Robust Reinforcement Learning aims to derive optimal behavior that accounts for model uncertainty in dynamical systems. However, previous studies have shown that by considering the worst-case scenario, robust policies can be overly conservative. Our soft-robust framework is an attempt to overcome this issue. In this paper, we present a novel Soft-Robust Actor-Critic algorithm (SR-AC). It learns an optimal policy with respect to a distribution over an uncertainty set, thereby remaining robust to model uncertainty while avoiding the conservativeness of worst-case robust strategies. We show the convergence of SR-AC and evaluate the efficiency of our approach on different domains by comparing it against regular learning methods and their robust formulations.
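
To make the distinction in the abstract concrete, the sketch below contrasts a worst-case (robust) evaluation of a fixed policy with a distribution-weighted (soft-robust) evaluation over a finite set of candidate transition models. This is a minimal illustration only: the uncertainty set, the weighting distribution, and all names (policy_value, robust_value, soft_robust_value) are our own assumptions and do not reproduce the SR-AC algorithm from the paper.

# Minimal sketch: robust (worst-case) vs. soft-robust (distribution-weighted)
# evaluation of a fixed policy over a finite uncertainty set of transition
# models. All quantities below are illustrative assumptions.
import numpy as np

def policy_value(P, R, pi, gamma=0.95):
    """Exact value of policy pi in the MDP (P, R) via the Bellman equation.

    P: (S, A, S) transition tensor, R: (S, A) reward matrix,
    pi: (S, A) stochastic policy.
    """
    S = P.shape[0]
    P_pi = np.einsum("sa,sat->st", pi, P)   # state-to-state kernel under pi
    r_pi = np.einsum("sa,sa->s", pi, R)     # expected one-step reward under pi
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

def robust_value(models, R, pi, gamma=0.95):
    """Worst-case value over the uncertainty set (per start state)."""
    return np.min([policy_value(P, R, pi, gamma) for P in models], axis=0)

def soft_robust_value(models, weights, R, pi, gamma=0.95):
    """Value averaged over the uncertainty set under a fixed distribution."""
    values = np.array([policy_value(P, R, pi, gamma) for P in models])
    return np.average(values, axis=0, weights=weights)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A, K = 4, 2, 3                        # states, actions, models in the set
    models = []
    for _ in range(K):                       # random transition models (assumed)
        P = rng.random((S, A, S))
        models.append(P / P.sum(axis=-1, keepdims=True))
    R = rng.random((S, A))
    pi = np.full((S, A), 1.0 / A)            # uniform policy for illustration
    w = np.array([0.5, 0.3, 0.2])            # assumed weights over the set

    print("robust (worst-case) value:", robust_value(models, R, pi))
    print("soft-robust value:        ", soft_robust_value(models, w, R, pi))

For any such set, the weighted average is never below the worst case, which is the sense in which a soft-robust criterion is less conservative than a purely robust one.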

Cite

Text

Derman et al. "Soft-Robust Actor-Critic Policy-Gradient." Conference on Uncertainty in Artificial Intelligence, 2018.

Markdown

[Derman et al. "Soft-Robust Actor-Critic Policy-Gradient." Conference on Uncertainty in Artificial Intelligence, 2018.](https://mlanthology.org/uai/2018/derman2018uai-soft/)

BibTeX

@inproceedings{derman2018uai-soft,
  title     = {{Soft-Robust Actor-Critic Policy-Gradient}},
  author    = {Derman, Esther and Mankowitz, Daniel J. and Mann, Timothy A. and Mannor, Shie},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2018},
  pages     = {208--218},
  url       = {https://mlanthology.org/uai/2018/derman2018uai-soft/}
}