Improved Corruption Robust Algorithms for Episodic Reinforcement Learning
Abstract
We study episodic reinforcement learning under unknown adversarial corruptions of both the rewards and the transition probabilities of the underlying system. We propose new algorithms that, compared to the existing results in \cite{lykouris2020corruption}, achieve strictly better regret bounds in terms of total corruption for the tabular setting. Specifically, first, our regret bounds depend on more precise numerical measures of the total reward corruption and the total transition corruption, rather than only on the total number of corrupted episodes. Second, our regret bounds are the first of their kind in the reinforcement learning setting in which the amount of corruption appears additively with respect to $\min\{ \sqrt{T},\text{PolicyGapComplexity} \}$ rather than multiplicatively. Our results follow from a general algorithmic framework that combines corruption-robust policy elimination meta-algorithms with plug-in reward-free exploration sub-algorithms. Replacing the meta-algorithm or sub-algorithm may extend the framework to other corrupted settings with additional structure.
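Schematically, and with hypothetical notation not fixed by the abstract ($C$ denoting the total amount of corruption and $T$ the number of episodes), the claimed improvement in how corruption enters the regret bound can be sketched as:

```latex
% Schematic comparison only; constants and logarithmic factors omitted.
% Multiplicative-type dependence on corruption (earlier style of bound):
%   Regret = \tilde{O}\bigl( C \cdot \min\{\sqrt{T},\ \text{PolicyGapComplexity}\} \bigr)
% Additive dependence on corruption (the form claimed in this work):
%   Regret = \tilde{O}\bigl( \min\{\sqrt{T},\ \text{PolicyGapComplexity}\} + C \bigr)
\[
  \underbrace{\tilde{O}\!\left( C \cdot \min\{\sqrt{T},\ \text{PolicyGapComplexity}\} \right)}_{\text{multiplicative in } C}
  \quad\longrightarrow\quad
  \underbrace{\tilde{O}\!\left( \min\{\sqrt{T},\ \text{PolicyGapComplexity}\} + C \right)}_{\text{additive in } C}
\]
```

The additive form is strictly better whenever $C$ is large, since the corruption term no longer scales the entire uncorrupted rate.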
Chen et al. "Improved Corruption Robust Algorithms for Episodic Reinforcement Learning." International Conference on Machine Learning, 2021.
@inproceedings{chen2021icml-improved,
title = {{Improved Corruption Robust Algorithms for Episodic Reinforcement Learning}},
author = {Chen, Yifang and Du, Simon and Jamieson, Kevin},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {1561-1570},
volume = {139},
url = {https://mlanthology.org/icml/2021/chen2021icml-improved/}
}