Deterministic Policy Gradient Algorithms

Abstract

In this paper we consider deterministic policy gradient algorithms for reinforcement learning with continuous actions. The deterministic policy gradient has a particularly appealing form: it is the expected gradient of the action-value function. This simple form means that the deterministic policy gradient can be estimated much more efficiently than the usual stochastic policy gradient. To ensure adequate exploration, we introduce an off-policy actor-critic algorithm that learns a deterministic target policy from an exploratory behaviour policy. Deterministic policy gradient algorithms outperformed their stochastic counterparts in several benchmark problems, particularly in high-dimensional action spaces.
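For context, the "particularly appealing form" the abstract refers to is the deterministic policy gradient theorem stated in the paper. A sketch of that result is below, with notation assumed to follow the paper: $\mu_\theta$ is the deterministic policy, $Q^{\mu}$ its action-value function, and $\rho^{\mu}$ the discounted state distribution induced by the policy.

```latex
% Deterministic policy gradient theorem (Silver et al., 2014):
% the gradient of the performance objective J is the expectation, over the
% discounted state distribution rho^mu, of the policy Jacobian times the
% gradient of the action-value function evaluated at the policy's action.
\nabla_{\theta} J(\mu_{\theta})
  = \mathbb{E}_{s \sim \rho^{\mu}}\!\left[
      \nabla_{\theta}\, \mu_{\theta}(s)\,
      \nabla_{a} Q^{\mu}(s, a)\big|_{a = \mu_{\theta}(s)}
    \right]
```

Because the expectation is taken only over states (not over actions, as in the stochastic policy gradient), this form underlies the efficiency claim in the abstract.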

Cite

Text

Silver et al. "Deterministic Policy Gradient Algorithms." International Conference on Machine Learning, 2014.

Markdown

[Silver et al. "Deterministic Policy Gradient Algorithms." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/silver2014icml-deterministic/)

BibTeX

@inproceedings{silver2014icml-deterministic,
  title     = {{Deterministic Policy Gradient Algorithms}},
  author    = {Silver, David and Lever, Guy and Heess, Nicolas and Degris, Thomas and Wierstra, Daan and Riedmiller, Martin},
  booktitle = {International Conference on Machine Learning},
  year      = {2014},
  pages     = {387--395},
  volume    = {32},
  url       = {https://mlanthology.org/icml/2014/silver2014icml-deterministic/}
}