Bias in Natural Actor-Critic Algorithms

Abstract

We show that several popular discounted-reward natural actor-critics, including the NAC-LSTD and eNAC algorithms, do not generate unbiased estimates of the natural policy gradient as claimed. We derive the first unbiased discounted-reward natural actor-critics, using both batch and iterative approaches to gradient estimation. We argue that the bias makes the existing algorithms more appropriate for the average-reward setting. We also show that, when Sarsa(λ) is guaranteed to converge to an optimal policy, the objective function used by natural actor-critics is concave, so policy gradient methods are guaranteed to converge to globally optimal policies as well.
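
As a rough sketch of the quantity at issue (the notation below is standard but assumed here rather than taken from the abstract: θ denotes the policy parameters, γ the discount factor, π the policy, Q^π the action-value function, and d_γ the discounted state distribution), the discounted-reward policy gradient theorem can be written as

  J(\theta) \;=\; \mathbb{E}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t} r_{t} \;\middle|\; \theta \right],
  \qquad
  \nabla J(\theta) \;=\; \sum_{s} d_{\gamma}(s) \sum_{a} \nabla_{\theta}\, \pi(a \mid s;\, \theta)\, Q^{\pi}(s, a),

where

  d_{\gamma}(s) \;=\; \sum_{t=0}^{\infty} \gamma^{t} \Pr(s_{t} = s \mid \theta).

Sampling states from the undiscounted on-policy distribution effectively drops the γ^t weighting inside d_γ; roughly, this omission is the bias at issue, and it is why the resulting update direction better matches the average-reward setting.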

Cite

Text

Thomas. "Bias in Natural Actor-Critic Algorithms." International Conference on Machine Learning, 2014.

Markdown

[Thomas. "Bias in Natural Actor-Critic Algorithms." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/thomas2014icml-bias/)

BibTeX

@inproceedings{thomas2014icml-bias,
  title     = {{Bias in Natural Actor-Critic Algorithms}},
  author    = {Thomas, Philip},
  booktitle = {International Conference on Machine Learning},
  year      = {2014},
  pages     = {441--448},
  volume    = {32},
  url       = {https://mlanthology.org/icml/2014/thomas2014icml-bias/}
}