Your Policy Regularizer Is Secretly an Adversary

Abstract

Policy regularization methods such as maximum entropy regularization are widely used in reinforcement learning to improve the robustness of a learned policy. In this paper, we unify and extend recent work showing that this robustness arises from hedging against worst-case perturbations of the reward function, which are chosen from a limited set by an implicit adversary. Using convex duality, we characterize the robust set of adversarial reward perturbations under KL- and $\alpha$-divergence regularization, which includes Shannon and Tsallis entropy regularization as special cases. Importantly, generalization guarantees can be given within this robust set. We provide detailed discussion of the worst-case reward perturbations, and present intuitive empirical examples to illustrate this robustness and its relationship with generalization. Finally, we discuss how our analysis complements previous results on adversarial reward robustness and path consistency optimality conditions.
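
As a minimal one-step (bandit) sketch of the duality described in the abstract, assume KL regularization toward a reference policy $\pi_0$ with regularization strength $1/\beta$ (notation chosen here for the sketch, not taken from the paper). The regularized objective has a closed-form log-sum-exp value:

$$\max_{\pi}\; \mathbb{E}_{\pi}[r(a)] - \tfrac{1}{\beta}\,\mathrm{KL}(\pi \,\|\, \pi_0) \;=\; \tfrac{1}{\beta}\log \sum_{a} \pi_0(a)\, e^{\beta r(a)}, \qquad \pi^*(a) \propto \pi_0(a)\, e^{\beta r(a)}.$$

Defining the perturbation $\Delta r^*(a) = \tfrac{1}{\beta}\log\tfrac{\pi^*(a)}{\pi_0(a)}$, which plays the role of the implicit adversary's worst-case choice in the KL-regularized case, simple algebra shows the perturbed reward is constant across actions,

$$r(a) - \Delta r^*(a) \;=\; \tfrac{1}{\beta}\log \sum_{a'} \pi_0(a')\, e^{\beta r(a')} \quad \text{for all } a,$$

so the regularized policy is indifferent under the adversarially modified reward. The paper develops the sequential, discounted version of this picture, its $\alpha$-divergence (Tsallis entropy) generalization, and the corresponding robust set of feasible perturbations.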

Cite

Text

Brekelmans et al. "Your Policy Regularizer Is Secretly an Adversary." Transactions on Machine Learning Research, 2022.

Markdown

[Brekelmans et al. "Your Policy Regularizer Is Secretly an Adversary." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/brekelmans2022tmlr-your/)

BibTeX

@article{brekelmans2022tmlr-your,
  title     = {{Your Policy Regularizer Is Secretly an Adversary}},
  author    = {Brekelmans, Rob and Genewein, Tim and Grau-Moya, Jordi and Deletang, Gregoire and Kunesch, Markus and Legg, Shane and Ortega, Pedro A.},
  journal   = {Transactions on Machine Learning Research},
  year      = {2022},
  url       = {https://mlanthology.org/tmlr/2022/brekelmans2022tmlr-your/}
}