Learning Long-Term Reward Redistribution via Randomized Return Decomposition

Abstract

Many practical applications of reinforcement learning require agents to learn from sparse and delayed rewards. This challenges agents' ability to attribute their actions to future outcomes. In this paper, we consider the problem formulation of episodic reinforcement learning with trajectory feedback. This setting refers to an extreme delay of reward signals, in which the agent can obtain only one reward signal at the end of each trajectory. A popular paradigm for this setting is to learn with a designed auxiliary dense reward function, namely a proxy reward, instead of the sparse environmental signal. Building on this framework, this paper proposes a novel reward redistribution algorithm, randomized return decomposition (RRD), to learn a proxy reward function for episodic reinforcement learning. We establish a surrogate problem via Monte-Carlo sampling that scales up least-squares-based reward redistribution to long-horizon problems. We analyze our surrogate loss function through its connection with existing methods in the literature, which illustrates the algorithmic properties of our approach. In experiments, we extensively evaluate our proposed method on a variety of benchmark tasks with episodic rewards and demonstrate substantial improvement over baseline algorithms.
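To make the idea concrete, below is a minimal sketch of a least-squares return-decomposition loss evaluated on a randomly sampled subsequence, in the spirit of the surrogate problem described above. The model class, function names (`RewardModel`, `rrd_loss`), subsample size `k`, and the exact rescaling are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a randomized return decomposition surrogate loss.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Per-step proxy reward r_theta(s, a) (assumed MLP architecture)."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def rrd_loss(reward_model, obs, act, episodic_return, k=64):
    """Least-squares return decomposition on a random subsequence.

    obs, act: tensors of shape (T, obs_dim) and (T, act_dim) for one trajectory.
    episodic_return: scalar trajectory feedback observed at the episode's end.
    k: number of randomly sampled time steps (Monte-Carlo subsample).
    """
    T = obs.shape[0]
    idx = torch.randperm(T)[: min(k, T)]          # uniform random subsequence
    r_hat = reward_model(obs[idx], act[idx])      # proxy rewards on the subsample
    # Rescale the partial sum so it estimates the sum of proxy rewards
    # over the full trajectory, then regress it onto the episodic return.
    return_estimate = r_hat.sum() * (T / idx.numel())
    return (episodic_return - return_estimate) ** 2
```

Minimizing this loss over many trajectories and random subsequences approximates the full least-squares reward redistribution objective without summing proxy rewards over the entire (possibly very long) horizon.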

Cite

Text

Ren et al. "Learning Long-Term Reward Redistribution via Randomized Return Decomposition." International Conference on Learning Representations, 2022.

Markdown

[Ren et al. "Learning Long-Term Reward Redistribution via Randomized Return Decomposition." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/ren2022iclr-learning/)

BibTeX

@inproceedings{ren2022iclr-learning,
  title     = {{Learning Long-Term Reward Redistribution via Randomized Return Decomposition}},
  author    = {Ren, Zhizhou and Guo, Ruihan and Zhou, Yuan and Peng, Jian},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/ren2022iclr-learning/}
}