Scale-Free Adversarial Reinforcement Learning
Abstract
This paper initiates the study of scale-free learning in Markov Decision Processes (MDPs), where the scale of rewards/losses is unknown to the learner. We design a generic algorithmic framework, \underline{S}cale \underline{C}lipping \underline{B}ound (\texttt{SCB}), and instantiate this framework in both the adversarial Multi-armed Bandit (MAB) setting and the adversarial MDP setting. Through this framework, we achieve the first minimax optimal expected regret bound and the first high-probability regret bound in scale-free adversarial MABs, resolving an open problem raised in \cite{hadiji2020adaptation}. For adversarial MDPs, our framework also gives rise to the first scale-free RL algorithm with a $\tilde{\mathcal{O}}(\sqrt{T})$ high-probability regret guarantee.
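The paper's \texttt{SCB} framework is not reproduced here, but the general idea it builds on, clipping observed losses to an adaptively maintained scale estimate before feeding them to an exponential-weights update, can be illustrated with a minimal EXP3-style sketch. The `get_loss` callback, the step-size schedule, and the specific clipping rule below are illustrative assumptions for this sketch, not the algorithm or analysis from the paper.

```python
import numpy as np

def scale_free_exp3(get_loss, K, T, seed=0):
    """EXP3-style bandit with adaptive loss clipping (illustrative sketch).

    Losses may have an unknown, arbitrary scale; each observed loss is
    clipped to the largest magnitude seen so far, and the step size is
    rescaled by that running estimate. This is NOT the SCB algorithm
    from the paper, only a toy illustration of scale clipping.
    """
    rng = np.random.default_rng(seed)
    log_w = np.zeros(K)          # log-weights, kept in log space for stability
    scale = 1e-12                # running estimate of the loss scale
    total_loss = 0.0
    for t in range(1, T + 1):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        arm = rng.choice(K, p=p)
        loss = get_loss(t, arm)              # adversarial loss, unknown scale
        total_loss += loss
        scale = max(scale, abs(loss))        # update the scale estimate
        clipped = np.clip(loss, -scale, scale)
        est = clipped / max(p[arm], 1e-12)   # importance-weighted estimate
        eta = np.sqrt(np.log(K) / (K * t)) / scale   # scale-adaptive step size
        log_w[arm] -= eta * est
    return total_loss
```

For instance, calling `scale_free_exp3(lambda t, a: (a + 1) * 1e4, K=5, T=1000)` runs the sketch on losses three orders of magnitude larger than the unit range assumed by standard EXP3, without any prior knowledge of that scale.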
Cite
Text
Chen and Zhang. "Scale-Free Adversarial Reinforcement Learning." Conference on Learning Theory, 2024.
Markdown
[Chen and Zhang. "Scale-Free Adversarial Reinforcement Learning." Conference on Learning Theory, 2024.](https://mlanthology.org/colt/2024/chen2024colt-scalefree/)
BibTeX
@inproceedings{chen2024colt-scalefree,
title = {{Scale-Free Adversarial Reinforcement Learning}},
author = {Chen, Mingyu and Zhang, Xuezhou},
booktitle = {Conference on Learning Theory},
year = {2024},
pages = {1068--1101},
volume = {247},
url = {https://mlanthology.org/colt/2024/chen2024colt-scalefree/}
}