State-Free Reinforcement Learning
Abstract
In this work, we study the \textit{state-free RL} problem, where the algorithm has no information about the states before interacting with the environment. Specifically, denoting the reachable state set by $\mathcal{S}^{\Pi} := \{ s \mid \max_{\pi\in \Pi} q^{P, \pi}(s) > 0 \}$, we design an algorithm that requires no information on the state space $\mathcal{S}$, while achieving a regret that is completely independent of $\mathcal{S}$ and depends only on $\mathcal{S}^{\Pi}$. We view this as a concrete first step towards \textit{parameter-free RL}, with the goal of designing RL algorithms that require no hyper-parameter tuning.
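To make the definition of the reachable state set concrete, the following is a minimal sketch (not from the paper) of how $\mathcal{S}^{\Pi}$ could be computed in a finite-horizon tabular MDP: the occupancy measure $q^{P,\pi}(s)$ is obtained by forward propagation of the state distribution under $\pi$, and $\mathcal{S}^{\Pi}$ collects the states with positive occupancy under some policy in $\Pi$. The names and shapes (`P`, `policies`, `mu`, horizon `H`) are illustrative assumptions.

```python
import numpy as np

def occupancy(P, pi, mu, H):
    """State occupancy q^{P,pi}(s) = sum_h Pr[s_h = s] under policy pi.

    P  : array (S, A, S), transition probabilities P[s, a, s'].
    pi : array (H, S, A), pi[h, s, a] = Pr[a_h = a | s_h = s].
    mu : array (S,), initial state distribution.
    H  : horizon length.
    """
    d = mu.copy()   # distribution of s_0
    q = d.copy()    # accumulates Pr[s_h = s] over h = 0, ..., H-1
    for h in range(H - 1):
        d_sa = d[:, None] * pi[h]              # state-action distribution at step h
        d = np.einsum("sa,sat->t", d_sa, P)    # distribution of s_{h+1}
        q += d
    return q

def reachable_states(P, policies, mu, H, tol=1e-12):
    """S^Pi = { s : max over the given policies of q^{P,pi}(s) > 0 }."""
    q_max = np.max([occupancy(P, pi, mu, H) for pi in policies], axis=0)
    return np.flatnonzero(q_max > tol)
```

In this sketch the policy class $\Pi$ is given as an explicit list of tabular policies; the point of the abstract is that the algorithm itself never needs $\mathcal{S}$ up front, and its regret scales only with the states that such policies can actually reach.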
Cite
Text
Chen et al. "State-Free Reinforcement Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-3738
Markdown
[Chen et al. "State-Free Reinforcement Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/chen2024neurips-statefree/) doi:10.52202/079017-3738
BibTeX
@inproceedings{chen2024neurips-statefree,
title = {{State-Free Reinforcement Learning}},
author = {Chen, Mingyu and Pacchiano, Aldo and Zhang, Xuezhou},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-3738},
url = {https://mlanthology.org/neurips/2024/chen2024neurips-statefree/}
}