Bootstrapped Reward Shaping

Abstract

In reinforcement learning, especially in sparse-reward domains, many environment steps are required to observe reward information. To increase the frequency of such observations, "potential-based reward shaping" (PBRS) has been proposed as a method of providing a denser reward signal while leaving the optimal policy invariant. However, the required potential function must be carefully designed with task-dependent knowledge so as not to degrade training performance. In this work, we propose a bootstrapped method of reward shaping, termed BS-RS, in which the agent's current estimate of the state-value function acts as the potential function for PBRS. We provide convergence proofs for the tabular setting, give insights into training dynamics for deep RL, and show that the proposed method improves training speed in the Atari suite.
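To make the shaping rule concrete, the sketch below computes the PBRS shaped reward r' = r + γΦ(s') − Φ(s) with the agent's current value estimate V standing in for the potential Φ, as in the BS-RS idea. The function name, the toy trajectory, and the values here are illustrative assumptions, not taken from the paper; the telescoping check only illustrates why any such potential leaves the optimal policy invariant.

```python
import numpy as np

def shaped_reward(r, v_s, v_next, gamma, done):
    """PBRS shaped reward r' = r + gamma * Phi(s') - Phi(s), with Phi = V.

    The potential at terminal states is conventionally taken to be zero.
    """
    phi_next = 0.0 if done else v_next
    return r + gamma * phi_next - v_s

# Telescoping check on a short sparse-reward trajectory: the shaped return
# differs from the true return only by the initial potential -Phi(s_0),
# a constant per start state, which is why the greedy/optimal policy is
# unchanged while intermediate rewards become denser.
gamma = 0.9
V = np.array([0.5, 1.2, 2.0, 0.0])   # arbitrary current value estimates
rewards = [0.0, 0.0, 1.0]            # sparse: reward only on the last step
states = [0, 1, 2, 3]                # s_0 -> s_1 -> s_2 -> s_3 (terminal)

shaped_return = sum(
    gamma**t * shaped_reward(rewards[t], V[states[t]], V[states[t + 1]],
                             gamma, done=(t == 2))
    for t in range(3)
)
true_return = sum(gamma**t * rewards[t] for t in range(3))
assert np.isclose(shaped_return, true_return - V[states[0]])
```

Note that the intermediate shaped rewards are nonzero whenever V assigns different values to successive states, so the agent receives informative feedback well before reaching the goal.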

Cite

Text

Adamczyk et al. "Bootstrapped Reward Shaping." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I15.33679

Markdown

[Adamczyk et al. "Bootstrapped Reward Shaping." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/adamczyk2025aaai-bootstrapped/) doi:10.1609/AAAI.V39I15.33679

BibTeX

@inproceedings{adamczyk2025aaai-bootstrapped,
  title     = {{Bootstrapped Reward Shaping}},
  author    = {Adamczyk, Jacob and Makarenko, Volodymyr and Tiomkin, Stas and Kulkarni, Rahul V.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {15302--15310},
  doi       = {10.1609/AAAI.V39I15.33679},
  url       = {https://mlanthology.org/aaai/2025/adamczyk2025aaai-bootstrapped/}
}