Universal Value Function Approximators
Abstract
Value functions are a core component of reinforcement learning. The main idea is to construct a single function approximator V(s; θ) that estimates the long-term reward from any state s, using parameters θ. In this paper we introduce universal value function approximators (UVFAs) V(s, g; θ) that generalise not just over states s but also over goals g. We develop an efficient technique for supervised learning of UVFAs, by factoring observed values into separate embedding vectors for state and goal, and then learning a mapping from s and g to these factored embedding vectors. We show how this technique may be incorporated into a reinforcement learning algorithm that updates the UVFA solely from observed rewards. Finally, we demonstrate that a UVFA can successfully generalise to previously unseen goals.
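The abstract describes a two-stream factorisation: the state s and the goal g are each mapped to an embedding vector, and the value estimate is recovered by combining the two embeddings. Below is a minimal sketch of that idea, written in PyTorch (the framework, layer sizes, and dot-product combination are illustrative assumptions, not the paper's exact architecture).

# Sketch of a two-stream UVFA: V(s, g) ≈ φ(s) · ψ(g).
# All network shapes here are hypothetical choices for illustration.
import torch
import torch.nn as nn

class UVFA(nn.Module):
    def __init__(self, state_dim, goal_dim, embed_dim=16):
        super().__init__()
        # φ: maps a state to its embedding vector
        self.state_net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, embed_dim))
        # ψ: maps a goal to its embedding vector
        self.goal_net = nn.Sequential(
            nn.Linear(goal_dim, 64), nn.ReLU(), nn.Linear(64, embed_dim))

    def forward(self, state, goal):
        phi = self.state_net(state)      # state embedding φ(s)
        psi = self.goal_net(goal)        # goal embedding ψ(g)
        return (phi * psi).sum(dim=-1)   # combine embeddings into a scalar value

# Usage: regress the output toward observed returns (or TD targets) for (s, g) pairs,
# then query the same network with goals never seen during training.
model = UVFA(state_dim=4, goal_dim=4)
values = model(torch.randn(8, 4), torch.randn(8, 4))  # batch of 8 value estimates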
Cite

Text
Schaul et al. "Universal Value Function Approximators." International Conference on Machine Learning, 2015.

Markdown
[Schaul et al. "Universal Value Function Approximators." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/schaul2015icml-universal/)

BibTeX
@inproceedings{schaul2015icml-universal,
title = {{Universal Value Function Approximators}},
author = {Schaul, Tom and Horgan, Daniel and Gregor, Karol and Silver, David},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {1312-1320},
volume = {37},
url = {https://mlanthology.org/icml/2015/schaul2015icml-universal/}
}