Loaded DiCE: Trading Off Bias and Variance in Any-Order Score Function Gradient Estimators for Reinforcement Learning

Abstract

Gradient-based methods for optimisation of objectives in stochastic settings with unknown or intractable dynamics require estimators of derivatives. We derive an objective that, under automatic differentiation, produces low-variance unbiased estimators of derivatives at any order. Our objective is compatible with arbitrary advantage estimators, which allows control of the bias and variance of any-order derivatives when using function approximation. Furthermore, we propose a method to trade off bias and variance of higher-order derivatives by discounting the impact of more distant causal dependencies. We demonstrate the correctness and utility of our estimator in analytically tractable MDPs and in meta-reinforcement learning for continuous control.
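The estimator described above builds on the "MagicBox" operator from the original DiCE paper, defined as exp(l − stop_gradient(l)): it evaluates to exactly 1 in the forward pass, but differentiating it produces the score function dl/dθ, which is what makes repeated automatic differentiation yield correct any-order estimators. A minimal sketch of that identity, using a toy log-probability and emulating stop_gradient by freezing a detached copy (all names and values here are illustrative assumptions, not the authors' code):

```python
import math

def log_prob(theta):
    # Toy log-probability of a sampled action under a Gaussian policy
    # (hypothetical example; any differentiable log-prob works here).
    return -0.5 * (1.0 - theta) ** 2

def surrogate(theta, theta_detached, reward):
    # MagicBox(l) * reward, with stop_gradient emulated by evaluating
    # a frozen copy of log_prob at theta_detached.
    return math.exp(log_prob(theta) - log_prob(theta_detached)) * reward

theta0, reward = 0.3, 2.0

# Forward pass: MagicBox evaluates to exactly 1, so the surrogate
# equals the (undifferentiated) reward.
assert surrogate(theta0, theta0, reward) == reward

# Derivative by central finite differences, holding the detached copy
# fixed, as stop_gradient would.
eps = 1e-6
fd_grad = (surrogate(theta0 + eps, theta0, reward)
           - surrogate(theta0 - eps, theta0, reward)) / (2 * eps)

# Analytic score-function gradient: reward * d(log_prob)/d(theta).
score_grad = reward * (1.0 - theta0)
print(abs(fd_grad - score_grad) < 1e-4)  # True
```

Loaded DiCE's contribution is to combine this operator with arbitrary advantage estimators and a discounting of distant causal dependencies, so that bias and variance can be traded off at every derivative order rather than only the first.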

Cite

Text

Farquhar et al. "Loaded DiCE: Trading Off Bias and Variance in Any-Order Score Function Gradient Estimators for Reinforcement Learning." Neural Information Processing Systems, 2019.

Markdown

[Farquhar et al. "Loaded DiCE: Trading Off Bias and Variance in Any-Order Score Function Gradient Estimators for Reinforcement Learning." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/farquhar2019neurips-loaded/)

BibTeX

@inproceedings{farquhar2019neurips-loaded,
  title     = {{Loaded DiCE: Trading Off Bias and Variance in Any-Order Score Function Gradient Estimators for Reinforcement Learning}},
  author    = {Farquhar, Gregory and Whiteson, Shimon and Foerster, Jakob},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {8151--8162},
  url       = {https://mlanthology.org/neurips/2019/farquhar2019neurips-loaded/}
}