Corruption-Robust Offline Reinforcement Learning

Abstract

We study adversarial robustness in offline reinforcement learning. Given a batch dataset consisting of tuples $(s, a, r, s')$, an adversary is allowed to arbitrarily modify an $\epsilon$ fraction of the tuples. From the corrupted dataset, the learner aims to robustly identify a near-optimal policy. We first show that a worst-case $\Omega(d\epsilon)$ optimality gap is unavoidable in linear MDPs of dimension $d$, even if the adversary only corrupts the reward element of each tuple. This contrasts with dimension-free results in robust supervised learning and the best-known lower bound in the online RL setting with corruption. Next, we propose robust variants of the Least-Squares Value Iteration (LSVI) algorithm that utilize robust supervised learning oracles and achieve near-matching performance both with and without full data coverage. In the no-coverage case, the algorithm requires knowledge of $\epsilon$ to design the pessimism bonus. Surprisingly, this knowledge is necessary: we show that adapting to an unknown $\epsilon$ is impossible. This again contrasts with recent results on corruption-robust online RL and implies that robust offline RL is a strictly harder problem.
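To make the algorithmic idea concrete, below is a minimal sketch (not the authors' implementation) of a single pessimistic LSVI backup on $d$-dimensional features. The robust supervised learning oracle is stubbed with a simple Huber-style regression, and the $\epsilon$-dependent bonus scaling is a hypothetical illustration of how the corruption level could inflate the pessimism term; the exact oracle and bonus constants in the paper differ.

```python
# Sketch of one pessimistic least-squares value-iteration (LSVI) backup.
# Assumptions: Huber IRLS stands in for the robust regression oracle, and
# `beta` is a hypothetical eps-dependent pessimism scaling, not the paper's.
import numpy as np


def huber_regression(Phi, y, delta=1.0, lam=1.0, iters=50):
    """Crude IRLS Huber regression as a placeholder robust oracle."""
    d = Phi.shape[1]
    w = np.zeros(d)
    for _ in range(iters):
        resid = y - Phi @ w
        s = np.where(np.abs(resid) <= delta,
                     1.0, delta / np.maximum(np.abs(resid), 1e-8))
        A = Phi.T @ (Phi * s[:, None]) + lam * np.eye(d)
        w = np.linalg.solve(A, Phi.T @ (s * y))
    return w


def pessimistic_lsvi_step(Phi, targets, eps, lam=1.0):
    """One backup: robust fit of Bellman targets minus an eps-scaled bonus."""
    n, d = Phi.shape
    w = huber_regression(Phi, targets, lam=lam)      # robust value fit
    Sigma_inv = np.linalg.inv(Phi.T @ Phi + lam * np.eye(d))
    beta = np.sqrt(d) + eps * np.sqrt(n)              # hypothetical scaling
    # Elliptical bonus sqrt(phi^T Sigma^{-1} phi), evaluated per data point.
    bonus = beta * np.sqrt(np.einsum("ij,jk,ik->i", Phi, Sigma_inv, Phi))
    return w, Phi @ w - bonus                         # pessimistic Q-estimates


# Toy usage on synthetic features with lightly corrupted targets.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 5))
targets = Phi @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
targets[:10] += 50.0                                  # adversarial rewards
w, q_pess = pessimistic_lsvi_step(Phi, targets, eps=0.05)
```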

Cite

Text

Zhang et al. "Corruption-Robust Offline Reinforcement Learning." Artificial Intelligence and Statistics, 2022.

Markdown

[Zhang et al. "Corruption-Robust Offline Reinforcement Learning." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/zhang2022aistats-corruptionrobust/)

BibTeX

@inproceedings{zhang2022aistats-corruptionrobust,
  title     = {{Corruption-Robust Offline Reinforcement Learning}},
  author    = {Zhang, Xuezhou and Chen, Yiding and Zhu, Xiaojin and Sun, Wen},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2022},
  pages     = {5757-5773},
  volume    = {151},
  url       = {https://mlanthology.org/aistats/2022/zhang2022aistats-corruptionrobust/}
}