Bounded Robustness in Reinforcement Learning via Lexicographic Objectives

Abstract

Policy robustness in Reinforcement Learning may not be desirable at any cost: the deviations that robustness requirements induce in otherwise optimal policies should be explainable, quantifiable and formally verifiable. In this work we study how policies can be made maximally robust to arbitrary observational noise. We analyse how such noise alters policies through a stochastic linear operator interpretation of the disturbances, and establish connections between robustness and properties of the noise kernel and of the underlying MDPs. We then construct sufficient conditions for policy robustness and propose a robustness-inducing scheme, applicable to any policy gradient algorithm, that formally trades off expected policy utility for robustness via lexicographic optimisation, while preserving convergence and sub-optimality guarantees in the policy synthesis.
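
The lexicographic trade-off described in the abstract can be realised in several ways for gradient-based methods. Below is a minimal, hypothetical NumPy sketch of one such realisation (not the scheme from the paper): the utility gradient is treated as the primary objective, and the robustness gradient is projected so that, to first order, it does not degrade utility. All names and the specific projection rule are illustrative assumptions.

import numpy as np

def lexicographic_policy_gradient_step(theta, grad_utility, grad_robustness, lr=1e-2):
    # One generic lexicographic gradient step (illustrative sketch only).
    # The primary objective (expected utility) takes precedence; the
    # secondary objective (robustness) is followed only along directions
    # that do not oppose the primary ascent direction.
    g1 = np.asarray(grad_utility, dtype=float)
    g2 = np.asarray(grad_robustness, dtype=float)

    # If the robustness gradient conflicts with the utility gradient,
    # remove its component along the utility direction so the primary
    # objective is, to first order, not degraded.
    g1_norm_sq = float(np.dot(g1, g1))
    if g1_norm_sq > 0.0 and np.dot(g2, g1) < 0.0:
        g2 = g2 - (np.dot(g2, g1) / g1_norm_sq) * g1

    return theta + lr * (g1 + g2)

# Hypothetical usage with placeholder gradients:
theta = np.zeros(4)
theta = lexicographic_policy_gradient_step(
    theta,
    np.array([1.0, 0.0, 0.5, 0.0]),   # placeholder utility gradient
    np.array([-0.5, 1.0, 0.0, 0.2]),  # placeholder robustness gradient
)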

Cite

Text

Jarne Ornia et al. "Bounded Robustness in Reinforcement Learning via Lexicographic Objectives." Proceedings of the 6th Annual Learning for Dynamics & Control Conference, 2024.

Markdown

[Jarne Ornia et al. "Bounded Robustness in Reinforcement Learning via Lexicographic Objectives." Proceedings of the 6th Annual Learning for Dynamics & Control Conference, 2024.](https://mlanthology.org/l4dc/2024/jarneornia2024l4dc-bounded/)

BibTeX

@inproceedings{jarneornia2024l4dc-bounded,
  title     = {{Bounded Robustness in Reinforcement Learning via Lexicographic Objectives}},
  author    = {Jarne Ornia, Daniel and Romao, Licio and Hammond, Lewis and Mazo Jr., Manuel and Abate, Alessandro},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  year      = {2024},
  pages     = {954--967},
  volume    = {242},
  url       = {https://mlanthology.org/l4dc/2024/jarneornia2024l4dc-bounded/}
}