Penalizing Infeasible Actions and Reward Scaling in Reinforcement Learning with Offline Data
Abstract
Reinforcement learning with offline data suffers from Q-value extrapolation errors. To address this issue, we first show that linear extrapolation of the Q-function beyond the data range is particularly problematic. We therefore propose guiding Q-values to decrease gradually outside the data range, which is achieved through reward scaling with layer normalization (RS-LN) and a penalization mechanism for infeasible actions (PA). Combining RS-LN and PA yields a new algorithm, PARS. We evaluate PARS across a range of tasks, demonstrating superior performance over state-of-the-art algorithms in both offline training and online fine-tuning on the D4RL benchmark, with notable success on the challenging AntMaze Ultra task.
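The abstract only summarizes the two ingredients; as a rough illustration (not the authors' implementation), a critic with layer normalization plus a penalty term on actions sampled outside the feasible action bounds might look like the sketch below. The network sizes, penalty margin, penalty target `q_min`, and the assumption that rewards are pre-scaled are illustrative choices, not details taken from the paper.

```python
# Hypothetical sketch of the two ideas named in the abstract:
# (1) a layer-normalized critic trained on scaled rewards (RS-LN),
# (2) penalizing Q-values of infeasible, out-of-bound actions (PA).
# This is NOT the authors' code; names and constants are illustrative.
import torch
import torch.nn as nn

class LayerNormCritic(nn.Module):
    """Q-network with layer normalization after each hidden layer."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def critic_loss(critic, state, action, scaled_reward, target_q,
                action_low=-1.0, action_high=1.0, penalty_margin=0.5,
                q_min=0.0, gamma=0.99):
    """TD loss on dataset actions plus a penalty pushing Q-values of
    infeasible actions (outside the valid action box) toward a low value."""
    # Standard TD error on dataset transitions; rewards assumed pre-scaled (RS).
    q = critic(state, action)
    td_loss = ((q - (scaled_reward + gamma * target_q)) ** 2).mean()

    # Sample actions around and beyond the feasible range, keep only the
    # out-of-bound ones, and penalize high Q-values there (PA).
    infeasible = torch.empty_like(action).uniform_(
        action_low - penalty_margin, action_high + penalty_margin)
    outside = ((infeasible < action_low) | (infeasible > action_high)).any(dim=-1, keepdim=True)
    q_out = critic(state, infeasible)
    pa_loss = (outside.float() * (q_out - q_min) ** 2).mean()

    return td_loss + pa_loss
```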
Cite
Text
Kim et al. "Penalizing Infeasible Actions and Reward Scaling in Reinforcement Learning with Offline Data." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown
[Kim et al. "Penalizing Infeasible Actions and Reward Scaling in Reinforcement Learning with Offline Data." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/kim2025icml-penalizing/)

BibTeX
@inproceedings{kim2025icml-penalizing,
title = {{Penalizing Infeasible Actions and Reward Scaling in Reinforcement Learning with Offline Data}},
author = {Kim, Jeonghye and Shin, Yongjae and Jung, Whiyoung and Hong, Sunghoon and Yoon, Deunsol and Sung, Youngchul and Lee, Kanghoon and Lim, Woohyung},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {30769--30790},
volume = {267},
url = {https://mlanthology.org/icml/2025/kim2025icml-penalizing/}
}