Exploring the Training Robustness of Distributional Reinforcement Learning Against Noisy State Observations

Abstract

In real-world scenarios, the state observations an agent receives may contain measurement errors or adversarial noise, misleading the agent into taking suboptimal actions or even collapsing during training. In this paper, we study the training robustness of distributional Reinforcement Learning (RL), a class of state-of-the-art methods that estimate the whole distribution, rather than only the expectation, of the total return. Firstly, we validate the contraction of distributional Bellman operators in the State-Noisy Markov Decision Process (SN-MDP), a typical tabular case that incorporates both random and adversarial state observation noise. In the noisy setting with function approximation, we then analyze the vulnerability of the least squares loss in expectation-based RL with either linear or nonlinear function approximation. By contrast, we theoretically characterize the bounded gradient norm of the distributional RL loss based on the categorical parameterization equipped with the KL divergence. The resulting stable gradients during optimization account for the better training robustness of distributional RL against state observation noise. Finally, extensive experiments on a suite of environments verify that distributional RL is less vulnerable to both random and adversarial noisy state observations than its expectation-based counterpart.
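The "categorical parameterization equipped with the KL divergence" in the abstract refers to the C51-style loss: the distributional Bellman target is projected onto a fixed support of atoms, and the predicted distribution is fit by minimizing a KL divergence, which reduces to a cross-entropy. A minimal NumPy sketch of that projection and loss follows; function names and the uniform test distributions are illustrative, not taken from the paper.

```python
import numpy as np

def categorical_projection(rewards, next_probs, gamma, v_min, v_max, n_atoms):
    """Project the distributional Bellman target r + gamma * z onto the fixed
    support {z_0, ..., z_{N-1}}, returning the target probabilities m."""
    atoms = np.linspace(v_min, v_max, n_atoms)
    delta = (v_max - v_min) / (n_atoms - 1)
    # shifted atoms, clipped back onto the support
    tz = np.clip(rewards[:, None] + gamma * atoms[None, :], v_min, v_max)
    b = (tz - v_min) / delta                      # fractional index on the support
    lower = np.floor(b).astype(int)
    upper = np.ceil(b).astype(int)
    m = np.zeros_like(next_probs)
    batch = np.arange(len(rewards))
    for j in range(n_atoms):                      # split mass between neighbouring atoms
        l, u, p = lower[:, j], upper[:, j], next_probs[:, j]
        eq = (l == u).astype(float)               # b landed exactly on an atom
        np.add.at(m, (batch, l), p * (u - b[:, j] + eq))
        np.add.at(m, (batch, u), p * (b[:, j] - l))
    return m

def categorical_kl_loss(pred_probs, target_probs, eps=1e-8):
    """KL(m || p) up to a constant in p equals the cross-entropy -sum_i m_i log p_i."""
    return -np.sum(target_probs * np.log(pred_probs + eps), axis=1).mean()
```

Because the loss only touches the predicted probabilities through log-terms weighted by a normalized target distribution, its gradient with respect to the logits stays bounded, which is the mechanism the paper's robustness analysis builds on.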

Cite

Text

Sun et al. "Exploring the Training Robustness of Distributional Reinforcement Learning Against Noisy State Observations." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2023. doi:10.1007/978-3-031-43424-2_3

Markdown

[Sun et al. "Exploring the Training Robustness of Distributional Reinforcement Learning Against Noisy State Observations." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2023.](https://mlanthology.org/ecmlpkdd/2023/sun2023ecmlpkdd-exploring/) doi:10.1007/978-3-031-43424-2_3

BibTeX

@inproceedings{sun2023ecmlpkdd-exploring,
  title     = {{Exploring the Training Robustness of Distributional Reinforcement Learning Against Noisy State Observations}},
  author    = {Sun, Ke and Zhao, Yingnan and Jui, Shangling and Kong, Linglong},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2023},
  pages     = {36--51},
  doi       = {10.1007/978-3-031-43424-2_3},
  url       = {https://mlanthology.org/ecmlpkdd/2023/sun2023ecmlpkdd-exploring/}
}