Hessian Aided Policy Gradient

Abstract

Reducing the variance of policy gradient estimators has long been a focus of reinforcement learning research. While classic algorithms like REINFORCE find an $\epsilon$-approximate first-order stationary point in $\mathcal{O}(1/\epsilon^4)$ random trajectory simulations, no provable improvement on this complexity has been made so far. This paper presents a Hessian-aided policy gradient method with the first improved sample complexity of $\mathcal{O}(1/\epsilon^3)$. While our method exploits information from the policy Hessian, it can be implemented in time linear in the parameter dimension and is hence applicable to sophisticated DNN parameterizations. Simulations on standard tasks validate the efficiency of our method.
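
The linear-time claim rests on the fact that a Hessian-vector product can be computed by double backpropagation (Pearlmutter's trick) without ever forming the policy Hessian. The snippet below is a minimal PyTorch sketch of that primitive, not the authors' implementation; the names loss, params, and vec are illustrative assumptions (a differentiable scalar loss and matching lists of parameter and direction tensors).

import torch

def hessian_vector_product(loss, params, vec):
    # Illustrative sketch, not the paper's code.
    # First backward pass: gradient of the scalar loss w.r.t. the
    # parameters, keeping the graph so it can be differentiated again.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Inner product <grad, vec>; a second backward pass through this
    # scalar yields H @ vec in time linear in the parameter dimension.
    inner = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(inner, params)

In variance-reduced gradient estimators of this kind, vec would typically be a difference of consecutive parameter iterates, though the abstract leaves this detail to the paper.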

Cite

Text

Shen et al. "Hessian Aided Policy Gradient." International Conference on Machine Learning, 2019.

Markdown

[Shen et al. "Hessian Aided Policy Gradient." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/shen2019icml-hessian/)

BibTeX

@inproceedings{shen2019icml-hessian,
  title     = {{Hessian Aided Policy Gradient}},
  author    = {Shen, Zebang and Ribeiro, Alejandro and Hassani, Hamed and Qian, Hui and Mi, Chao},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {5729--5738},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/shen2019icml-hessian/}
}