Hybrid Reward Architecture for Reinforcement Learning
Abstract
One of the main challenges in reinforcement learning (RL) is generalisation. In typical deep RL methods this is achieved by approximating the optimal value function with a low-dimensional representation using a deep network. While this approach works well in many domains, in domains where the optimal value function cannot easily be reduced to a low-dimensional representation, learning can be very slow and unstable. This paper contributes towards tackling such challenging domains by proposing a new method, called Hybrid Reward Architecture (HRA). HRA takes as input a decomposed reward function and learns a separate value function for each component reward function. Because each component typically depends only on a subset of all features, the corresponding value function can be approximated more easily by a low-dimensional representation, enabling more effective learning. We demonstrate HRA on a toy problem and the Atari game Ms. Pac-Man, where HRA achieves above-human performance.
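To make the core idea concrete, below is a minimal sketch of the HRA principle described in the abstract: each component of the decomposed reward gets its own value-function "head", each head is trained only on its own component reward, and actions are chosen by aggregating (here, summing) the heads' Q-values. The class name `HRAAgent`, the tabular Q-learning heads, and the environment interface are illustrative assumptions for this sketch; the paper itself uses deep networks with a shared lower layer.

```python
import numpy as np

class HRAAgent:
    """Sketch of a Hybrid Reward Architecture agent with tabular Q-learning heads."""

    def __init__(self, n_states, n_actions, n_heads,
                 alpha=0.1, gamma=0.99, epsilon=0.1):
        # One Q-table per component reward ("head"); each head only needs to
        # model the value of its own reward component.
        self.q = np.zeros((n_heads, n_states, n_actions))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_actions = n_actions

    def act(self, state):
        # Epsilon-greedy action selection on the aggregated (summed) head values.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.q[:, state, :].sum(axis=0)))

    def update(self, state, action, rewards, next_state, done):
        # `rewards` is the decomposed reward: one scalar per head.
        # Each head performs an independent Q-learning update on its component.
        for k, r_k in enumerate(rewards):
            target = r_k
            if not done:
                target += self.gamma * self.q[k, next_state].max()
            self.q[k, state, action] += self.alpha * (target - self.q[k, state, action])
```

In use, the environment (or a wrapper around it) must expose the reward as a vector with one entry per component, e.g. in Ms. Pac-Man one component per pellet or ghost; the sum of the components equals the original scalar reward, so the aggregated policy still optimises the original objective under the paper's assumptions.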
Cite
Text
Van Seijen et al. "Hybrid Reward Architecture for Reinforcement Learning." Neural Information Processing Systems, 2017.

Markdown
[Van Seijen et al. "Hybrid Reward Architecture for Reinforcement Learning." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/seijen2017neurips-hybrid/)

BibTeX
@inproceedings{seijen2017neurips-hybrid,
title = {{Hybrid Reward Architecture for Reinforcement Learning}},
author = {Van Seijen, Harm and Fatemi, Mehdi and Romoff, Joshua and Laroche, Romain and Barnes, Tavian and Tsang, Jeffrey},
booktitle = {Neural Information Processing Systems},
year = {2017},
pages = {5392-5402},
url = {https://mlanthology.org/neurips/2017/seijen2017neurips-hybrid/}
}