Robust Offline Reinforcement Learning with Heavy-Tailed Rewards
Abstract
This paper endeavors to augment the robustness of offline reinforcement learning (RL) in scenarios laden with heavy-tailed rewards, a prevalent circumstance in real-world applications. We propose two algorithmic frameworks, ROAM and ROOM, for robust off-policy evaluation and offline policy optimization (OPO), respectively. Central to our frameworks is the strategic incorporation of the median-of-means method with offline RL, enabling straightforward uncertainty estimation for the value function estimator. This not only adheres to the principle of pessimism in OPO but also adeptly manages heavy-tailed rewards. Theoretical results and extensive experiments demonstrate that our two frameworks outperform existing methods when the logged dataset exhibits heavy-tailed reward distributions. The implementation of the proposal is available at \url{https://github.com/Mamba413/ROOM}.
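The median-of-means idea at the core of the abstract can be illustrated with a minimal, generic sketch (this is not the paper's ROAM/ROOM implementation, only the underlying estimator): split the samples into `k` groups, average within each group, and report the median of the group means, which blunts the influence of heavy-tailed outliers that would distort the plain sample mean.

```python
import numpy as np

def median_of_means(x, k=5, seed=0):
    """Median-of-means estimator.

    Randomly partition the samples into k groups, compute the mean of
    each group, and return the median of those group means. A single
    extreme observation can corrupt at most one group, so the estimator
    is far more robust to heavy-tailed noise than the sample mean.
    """
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(x)          # random group assignment
    groups = np.array_split(shuffled, k)   # k near-equal groups
    group_means = [g.mean() for g in groups]
    return float(np.median(group_means))

# 99 "typical" rewards plus one heavy-tailed outlier:
rewards = np.concatenate([np.zeros(99), [1000.0]])
print(rewards.mean())              # plain mean is dragged to 10.0
print(median_of_means(rewards))    # median-of-means stays at 0.0
```

In ROAM/ROOM the spread across the group-wise estimates additionally serves as an uncertainty measure, which is what lets the frameworks apply pessimism in offline policy optimization.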
Cite
Text
Zhu et al. "Robust Offline Reinforcement Learning with Heavy-Tailed Rewards." Artificial Intelligence and Statistics, 2024.
Markdown
[Zhu et al. "Robust Offline Reinforcement Learning with Heavy-Tailed Rewards." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/zhu2024aistats-robust/)
BibTeX
@inproceedings{zhu2024aistats-robust,
title = {{Robust Offline Reinforcement Learning with Heavy-Tailed Rewards}},
author = {Zhu, Jin and Wan, Runzhe and Qi, Zhengling and Luo, Shikai and Shi, Chengchun},
booktitle = {Artificial Intelligence and Statistics},
year = {2024},
pages = {541-549},
volume = {238},
url = {https://mlanthology.org/aistats/2024/zhu2024aistats-robust/}
}