No-Regret Reinforcement Learning with Heavy-Tailed Rewards
Abstract
Reinforcement learning algorithms typically assume that rewards are sampled from light-tailed distributions, such as Gaussian or bounded distributions. However, a wide variety of real-world systems generate rewards that follow heavy-tailed distributions. We consider such scenarios in the setting of undiscounted reinforcement learning. By constructing a lower bound, we show that the difficulty of learning heavy-tailed rewards asymptotically dominates the difficulty of learning transition probabilities. Leveraging techniques from robust mean estimation, we propose Heavy-UCRL2 and Heavy-Q-Learning, and show that they achieve near-optimal regret bounds in this setting. Our algorithms also naturally generalize to deep reinforcement learning applications; we instantiate Heavy-DQN as an example of this. We demonstrate that all of our algorithms outperform baselines on both synthetic MDPs and standard RL benchmarks.
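As a concrete illustration of the robust mean estimation machinery the abstract refers to, the sketch below implements a truncated empirical mean for heavy-tailed samples: observations above a size-dependent threshold are discarded, trading a small bias for much lighter tails in the estimate. The single-threshold form, the moment bound `u`, and the moment order `epsilon` are illustrative assumptions in the spirit of standard robust estimators (e.g., Bubeck et al., 2013), not the paper's exact construction.

```python
import numpy as np

def truncated_mean(samples, u, delta, epsilon=1.0):
    """Truncated empirical mean for heavy-tailed samples.

    Assumes the (1 + epsilon)-th raw moment is bounded by u. Samples whose
    magnitude exceeds a data-size-dependent threshold are zeroed out, so a
    few extreme draws cannot dominate the estimate. The threshold schedule
    is an illustrative assumption, not the paper's exact choice.
    """
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    threshold = (u * n / np.log(1.0 / delta)) ** (1.0 / (1.0 + epsilon))
    truncated = np.where(np.abs(samples) <= threshold, samples, 0.0)
    return truncated.mean()

# Example: rewards drawn from a heavy-tailed (Pareto) distribution.
rng = np.random.default_rng(0)
rewards = rng.pareto(a=2.5, size=1000)
print(truncated_mean(rewards, u=10.0, delta=0.05), rewards.mean())
```

In an optimistic algorithm such as UCRL2, an estimator of this kind would replace the ordinary empirical mean of the observed rewards before confidence bonuses are added, which is how heavy-tailed noise can be handled without assuming bounded rewards.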
Cite
Text
Zhuang and Sui. "No-Regret Reinforcement Learning with Heavy-Tailed Rewards." Artificial Intelligence and Statistics, 2021.

Markdown
[Zhuang and Sui. "No-Regret Reinforcement Learning with Heavy-Tailed Rewards." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/zhuang2021aistats-noregret/)

BibTeX
@inproceedings{zhuang2021aistats-noregret,
title = {{No-Regret Reinforcement Learning with Heavy-Tailed Rewards}},
author = {Zhuang, Vincent and Sui, Yanan},
booktitle = {Artificial Intelligence and Statistics},
year = {2021},
pages = {3385--3393},
volume = {130},
url = {https://mlanthology.org/aistats/2021/zhuang2021aistats-noregret/}
}