A Case for Validation Buffer in Pessimistic Actor-Critic
Abstract
In this paper, we investigate the issue of error accumulation in critic networks updated via pessimistic temporal difference objectives. We show that the critic approximation error can be modeled by a recursive fixed-point equation analogous to the Bellman equation for the value function. We use this recursive definition to derive the conditions under which the pessimistic critic is unbiased. Building on these insights, we propose the Validation Pessimism Learning (VPL) algorithm. VPL uses a small validation buffer to adjust the level of pessimism throughout agent training, setting pessimism so that the approximation error of the critic targets is minimized. We evaluate the proposed approach on a variety of locomotion and manipulation tasks and report improvements in sample efficiency and performance.
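The core idea of the abstract can be sketched in a few lines of code. The sketch below is illustrative only: the lower-confidence-bound form of the pessimistic target (mean of twin critics minus a pessimism coefficient times their spread) and the sign-based coefficient update are assumptions, not the paper's actual implementation, and the function names are hypothetical.

```python
import numpy as np

def pessimistic_target(q1, q2, beta):
    """Pessimistic value estimate from twin critic outputs.

    Assumes a lower-confidence-bound form: the mean of the two critic
    estimates minus beta times half their absolute disagreement.
    """
    mean = (q1 + q2) / 2.0
    spread = np.abs(q1 - q2) / 2.0
    return mean - beta * spread

def update_pessimism(beta, q1, q2, val_returns, lr=0.01):
    """Adjust the pessimism coefficient using a held-out validation buffer.

    If the pessimistic targets overestimate the validation return
    estimates on average, pessimism is increased; if they underestimate,
    it is decreased. The coefficient is kept non-negative.
    """
    err = np.mean(pessimistic_target(q1, q2, beta) - val_returns)
    return max(0.0, beta + lr * np.sign(err))
```

With twin critic outputs of 2.0 and 0.0 and beta = 1.0, the pessimistic target is 0.0; if the validation return is 0.5, the targets underestimate and the update reduces beta slightly.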
Cite
Text
Nauman et al. "A Case for Validation Buffer in Pessimistic Actor-Critic." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/665
Markdown
[Nauman et al. "A Case for Validation Buffer in Pessimistic Actor-Critic." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/nauman2025ijcai-case/) doi:10.24963/IJCAI.2025/665
BibTeX
@inproceedings{nauman2025ijcai-case,
title = {{A Case for Validation Buffer in Pessimistic Actor-Critic}},
author = {Nauman, Michal and Ostaszewski, Mateusz and Cygan, Marek},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
pages = {5976--5984},
doi = {10.24963/IJCAI.2025/665},
url = {https://mlanthology.org/ijcai/2025/nauman2025ijcai-case/}
}