Particle Value Functions

Abstract

The policy gradients of the expected return objective can react slowly to rare rewards. Yet, in some cases agents may wish to emphasize low or high returns regardless of their probability. Borrowing from the economics and control literature, we review the risk-sensitive value function that arises from an exponential utility and illustrate its effects with an example. This risk-sensitive value function is not always applicable to reinforcement learning problems, so we introduce the particle value function, defined by a particle filter over the distributions of an agent's experience, which bounds the risk-sensitive one. We illustrate the benefit of the policy gradients of this objective in Cliffworld.
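As a rough sketch of the two objects the abstract contrasts (the notation below is assumed for illustration, not quoted from the paper): the exponential-utility risk-sensitive value of a policy π with random return R and risk parameter β, together with a K-particle Monte Carlo counterpart built from sampled returns, can be written as

% Assumed notation (not from the entry): R = return under policy \pi,
% \beta = risk parameter, R_1, ..., R_K = sampled returns (particles).
\[
  V_\beta^\pi \;=\; \frac{1}{\beta}\,\log \mathbb{E}_\pi\!\left[e^{\beta R}\right],
  \qquad
  \hat{V}_{\beta,K}^\pi \;=\; \mathbb{E}\!\left[\frac{1}{\beta}\,
    \log \frac{1}{K}\sum_{k=1}^{K} e^{\beta R_k}\right].
\]
% By Jensen's inequality, E[log Z] <= log E[Z], so dividing by \beta gives
% \hat{V}_{\beta,K}^\pi <= V_\beta^\pi for \beta > 0 (reversed for \beta < 0).

By Jensen's inequality the particle quantity lower-bounds the risk-sensitive value for β > 0 (the bound reverses for β < 0), which is one way to read the abstract's statement that the particle value function bounds the risk-sensitive one; as β → 0 both reduce to the ordinary expected return E_π[R].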

Cite

Text

Maddison et al. "Particle Value Functions." International Conference on Learning Representations, 2017.

Markdown

[Maddison et al. "Particle Value Functions." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/maddison2017iclr-particle/)

BibTeX

@inproceedings{maddison2017iclr-particle,
  title     = {{Particle Value Functions}},
  author    = {Maddison, Chris J. and Lawson, Dieterich and Tucker, George and Heess, Nicolas and Doucet, Arnaud and Mnih, Andriy and Teh, Yee Whye},
  booktitle = {International Conference on Learning Representations},
  year      = {2017},
  url       = {https://mlanthology.org/iclr/2017/maddison2017iclr-particle/}
}