A Case for Validation Buffer in Pessimistic Actor-Critic

Abstract

In this paper, we investigate the issue of error accumulation in critic networks updated via pessimistic temporal difference objectives. We show that the critic approximation error can be modeled via a recursive fixed-point equation similar to that of the Bellman value. We use this recursive definition to derive the conditions under which the pessimistic critic is unbiased. Building on these insights, we propose the Validation Pessimism Learning (VPL) algorithm. VPL uses a small validation buffer to adjust the level of pessimism throughout agent training, with the pessimism set such that the approximation error of the critic targets is minimized. We evaluate the proposed approach on a variety of locomotion and manipulation tasks and report improvements in sample efficiency and performance.
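To make the core idea concrete, below is a minimal sketch of the pessimism-adjustment step the abstract describes: a held-out validation buffer is used to tune a pessimism coefficient so that the approximation error of the critic targets shrinks. The ensemble form mean - beta * std for the pessimistic target, the gradient update, and names such as update_pessimism are illustrative assumptions for this sketch, not the paper's exact formulation.

import numpy as np

def pessimistic_target(q_ensemble, beta):
    """Lower-confidence-bound target built from an ensemble of critic values."""
    return q_ensemble.mean(axis=0) - beta * q_ensemble.std(axis=0)

def update_pessimism(beta, q_ensemble, td_returns, lr=1e-3):
    """One gradient step on beta minimizing the squared validation error
    between pessimistic targets and bootstrapped validation returns."""
    error = pessimistic_target(q_ensemble, beta) - td_returns
    # Gradient of 0.5 * mean(error^2) w.r.t. beta, since
    # d(target)/d(beta) = -std(q_ensemble).
    grad = np.mean(error * (-q_ensemble.std(axis=0)))
    return beta - lr * grad

# Toy usage on synthetic validation data (stand-ins for real transitions).
rng = np.random.default_rng(0)
q_ensemble = rng.normal(size=(2, 256))  # two critics, 256 validation transitions
td_returns = rng.normal(size=256)       # stand-in for validation critic targets
beta = 1.0
for _ in range(100):
    beta = update_pessimism(beta, q_ensemble, td_returns)
print(f"tuned pessimism beta: {beta:.3f}")

Under these assumptions, beta increases when the critic targets overestimate the validation returns and decreases when they underestimate them, which matches the abstract's goal of keeping the pessimistic critic unbiased throughout training.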

Cite

Text

Nauman et al. "A Case for Validation Buffer in Pessimistic Actor-Critic." ICML 2024 Workshops: ARLET, 2024.

Markdown

[Nauman et al. "A Case for Validation Buffer in Pessimistic Actor-Critic." ICML 2024 Workshops: ARLET, 2024.](https://mlanthology.org/icmlw/2024/nauman2024icmlw-case/)

BibTeX

@inproceedings{nauman2024icmlw-case,
  title     = {{A Case for Validation Buffer in Pessimistic Actor-Critic}},
  author    = {Nauman, Michal and Ostaszewski, Mateusz and Cygan, Marek},
  booktitle = {ICML 2024 Workshops: ARLET},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/nauman2024icmlw-case/}
}