Compatible Gradient Approximations for Actor-Critic Algorithms
Abstract
Deterministic policy gradient algorithms are foundational for actor-critic methods in controlling continuous systems, yet they often encounter inaccuracies due to their dependence on the derivative of the critic's value estimates with respect to input actions. This reliance requires precise action-value gradient computations, a task that proves challenging under function approximation. We introduce an actor-critic algorithm that bypasses the need for such precision by employing a zeroth-order approximation of the action-value gradient through two-point stochastic gradient estimation within the action space. This approach provably and effectively addresses compatibility issues inherent in deterministic policy gradient schemes. Empirical results further demonstrate that our algorithm not only matches but frequently exceeds the performance of current state-of-the-art methods.
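The abstract's core idea is to replace the critic's analytic action-gradient with a two-point (zeroth-order) finite-difference estimate taken in the action space. The sketch below illustrates a generic two-point estimator of the action-value gradient grad_a Q(s, a); it is not the authors' implementation, and the names q_fn, act_dim, delta, and num_dirs are illustrative assumptions.

import numpy as np

def two_point_action_gradient(q_fn, state, action, delta=1e-2, num_dirs=8):
    # Zeroth-order estimate of grad_a Q(s, a): probe the critic at
    # action +/- delta*u along random unit directions u, without ever
    # differentiating the critic itself.
    act_dim = action.shape[0]
    grad = np.zeros(act_dim)
    for _ in range(num_dirs):
        u = np.random.randn(act_dim)
        u /= np.linalg.norm(u)  # random direction on the unit sphere
        slope = (q_fn(state, action + delta * u)
                 - q_fn(state, action - delta * u)) / (2.0 * delta)
        grad += slope * u  # directional slope projected back onto u
    # Scaling by act_dim makes the average unbiased for the gradient of
    # the sphere-smoothed critic (standard two-point estimator result).
    return act_dim * grad / num_dirs

The estimated vector would then stand in for the critic's action-gradient in the deterministic policy gradient update; delta trades off bias (larger values smooth the critic more) against noise sensitivity.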
Cite
Text
Saglam and Kalogerias. "Compatible Gradient Approximations for Actor-Critic Algorithms." ICML 2024 Workshops: RLControlTheory, 2024.

Markdown

[Saglam and Kalogerias. "Compatible Gradient Approximations for Actor-Critic Algorithms." ICML 2024 Workshops: RLControlTheory, 2024.](https://mlanthology.org/icmlw/2024/saglam2024icmlw-compatible/)

BibTeX
@inproceedings{saglam2024icmlw-compatible,
title = {{Compatible Gradient Approximations for Actor-Critic Algorithms}},
author = {Saglam, Baturay and Kalogerias, Dionysis},
booktitle = {ICML 2024 Workshops: RLControlTheory},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/saglam2024icmlw-compatible/}
}