State Advantage Weighting for Offline RL
Abstract
We present \textit{state advantage weighting} for offline reinforcement learning (RL). In contrast to the action advantage $A(s,a)$ commonly adopted in QSA learning, we leverage the state advantage $A(s,s^\prime)$ and QSS learning for offline RL, hence decoupling actions from values. We expect the agent to reach high-reward states, while the action is determined by how the agent can get to the corresponding state. Experiments on D4RL datasets show that our proposed method achieves remarkable performance against common baselines. Furthermore, our method shows good generalization capability when transferring from offline to online learning.
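A minimal sketch of how exp-weighted regression on the state advantage $A(s,s^\prime) = Q(s,s^\prime) - V(s)$ might look in PyTorch, assuming a QSS critic, a next-state proposal model, and an inverse dynamics model that recovers the action reaching the chosen state. This is not the authors' released implementation; the network sizes, the temperature `beta`, the weight clip, and all module names (`q_ss`, `v_s`, `next_state_model`, `inverse_dynamics`) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative dimensions and temperature (assumptions, not paper hyperparameters).
state_dim, action_dim, beta = 17, 6, 3.0

# QSS critic Q(s, s'): scores a transition to a candidate next state.
q_ss = nn.Sequential(nn.Linear(2 * state_dim, 256), nn.ReLU(), nn.Linear(256, 1))
# State value baseline V(s), used to form the state advantage A(s, s') = Q(s, s') - V(s).
v_s = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(), nn.Linear(256, 1))
# Next-state model ("which state to reach") and inverse dynamics ("which action
# reaches it") together decouple the action from the value estimates.
next_state_model = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(), nn.Linear(256, state_dim))
inverse_dynamics = nn.Sequential(nn.Linear(2 * state_dim, 256), nn.ReLU(), nn.Linear(256, action_dim))

def weighted_losses(s, a, s_next):
    """Advantage-weighted losses on a batch of dataset transitions (s, a, s')."""
    adv = q_ss(torch.cat([s, s_next], dim=-1)) - v_s(s)           # state advantage A(s, s')
    w = torch.clamp(torch.exp(adv / beta), max=100.0).detach()    # exp-weight, clipped for stability
    # Weight the next-state regression by how advantageous the reached state is.
    state_loss = (w * (next_state_model(s) - s_next).pow(2).sum(-1, keepdim=True)).mean()
    # Inverse dynamics is fit by plain regression: it only maps (s, s') to the action.
    action_loss = (inverse_dynamics(torch.cat([s, s_next], dim=-1)) - a).pow(2).sum(-1).mean()
    return state_loss, action_loss

# Usage with a random batch, just to show the expected shapes.
s = torch.randn(32, state_dim)
a = torch.randn(32, action_dim)
s_next = torch.randn(32, state_dim)
print(weighted_losses(s, a, s_next))
```

At execution time the agent would pick a desirable next state with `next_state_model` and then query `inverse_dynamics` for the action that reaches it, which is the sense in which actions are decoupled from value learning.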
Cite
Lyu et al. "State Advantage Weighting for Offline RL." NeurIPS 2022 Workshops: Offline_RL, 2022. https://mlanthology.org/neuripsw/2022/lyu2022neuripsw-state/
@inproceedings{lyu2022neuripsw-state,
title = {{State Advantage Weighting for Offline RL}},
author = {Lyu, Jiafei and Gong, Aicheng and Wan, Le and Lu, Zongqing and Li, Xiu},
booktitle = {NeurIPS 2022 Workshops: Offline_RL},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/lyu2022neuripsw-state/}
}