The Courage to Stop: Overcoming Sunk Cost Fallacy in Deep Reinforcement Learning
Abstract
Off-policy deep reinforcement learning (RL) agents typically leverage replay buffers to reuse past experiences during learning. This can improve sample efficiency when the collected data is informative and aligned with the learning objectives; when that is not the case, it "pollutes" the replay buffer with data that can exacerbate optimization challenges, in addition to wasting environment interactions on redundant sampling. We argue that sampling these uninformative and wasteful transitions can be avoided by addressing the sunk cost fallacy, which, in the context of deep RL, is the tendency to continue an episode until termination. To address this, we propose the learn to stop (LEAST) mechanism, which uses statistics based on $Q$-values and gradients to help agents recognize when to terminate unproductive episodes early. We demonstrate that our method improves learning efficiency across a variety of RL algorithms, evaluated on both the MuJoCo and DeepMind Control Suite benchmarks.
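As a rough illustration of statistic-driven early termination, the sketch below (a minimal sketch, not the paper's actual LEAST criterion, whose details are not given in the abstract) tracks a running baseline of observed $Q$-values and flags an episode as unproductive once its recent $Q$-values stay well below that baseline. The class name and the hyperparameters `margin`, `patience`, and `momentum` are hypothetical, and the gradient-based statistics mentioned in the abstract are omitted.

```python
class EarlyStopMonitor:
    """Illustrative early-termination check (not the paper's exact LEAST rule).

    Maintains an exponential running mean of Q-values seen during training and
    signals early termination when the current episode's Q-values fall well
    below that baseline for several consecutive steps.
    """

    def __init__(self, margin=1.0, patience=10, momentum=0.99):
        self.margin = margin        # how far below the baseline counts as "unproductive"
        self.patience = patience    # consecutive low-value steps before stopping
        self.momentum = momentum    # smoothing factor for the running baseline
        self.baseline = None        # running mean of observed Q-values
        self.low_count = 0          # consecutive steps below the threshold

    def update(self, q_value):
        """Record the Q-value of the current state-action pair.

        Returns True if the episode should be terminated early.
        """
        if self.baseline is None:
            self.baseline = q_value
        else:
            self.baseline = self.momentum * self.baseline + (1 - self.momentum) * q_value

        if q_value < self.baseline - self.margin:
            self.low_count += 1
        else:
            self.low_count = 0

        return self.low_count >= self.patience

    def reset(self):
        """Call at the start of each episode."""
        self.low_count = 0


# Usage (hypothetical): inside the data-collection loop,
#   monitor = EarlyStopMonitor()
#   ...
#   if monitor.update(q_value=float(critic(state, action))):
#       break  # stop the unproductive episode early and reset the environment
```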
Cite
Text
Liu et al. "The Courage to Stop: Overcoming Sunk Cost Fallacy in Deep Reinforcement Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Liu et al. "The Courage to Stop: Overcoming Sunk Cost Fallacy in Deep Reinforcement Learning." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/liu2025icml-courage/)
BibTeX
@inproceedings{liu2025icml-courage,
  title     = {{The Courage to Stop: Overcoming Sunk Cost Fallacy in Deep Reinforcement Learning}},
  author    = {Liu, Jiashun and Obando-Ceron, Johan and Castro, Pablo Samuel and Courville, Aaron and Pan, Ling},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {39171--39189},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/liu2025icml-courage/}
}