Information-Theoretic Task Selection for Meta-Reinforcement Learning
Abstract
In Meta-Reinforcement Learning (meta-RL), an agent is trained on a set of tasks to prepare for, and learn faster in, new, unseen, but related tasks. The training tasks are usually hand-crafted to be representative of the expected distribution of target tasks, and are hence all used in training. We show that, given a set of training tasks, learning can be both faster and more effective (leading to better performance in the target tasks) if the training tasks are appropriately selected. We propose a task selection algorithm based on information theory, which optimizes the set of tasks used for training in meta-RL, irrespective of how they are generated. The algorithm establishes which training tasks are both sufficiently relevant for the target tasks, and different enough from one another. We reproduce different meta-RL experiments from the literature and show that our task selection algorithm improves the final performance in all of them.
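The selection criterion described above, relevance to the target tasks combined with diversity among the chosen tasks, can be sketched as a greedy procedure. This is an illustrative sketch only, not the paper's algorithm: the function `select_tasks` and the precomputed `relevance` and `divergence` scores are hypothetical stand-ins for the information-theoretic quantities (e.g. divergences between task policies) that the paper actually defines.

```python
import numpy as np

def select_tasks(relevance, divergence, k):
    """Greedily pick k training tasks (hypothetical sketch).

    relevance[i]     -- assumed precomputed score of how informative
                        candidate task i is about the target tasks.
    divergence[i][j] -- assumed precomputed dissimilarity between
                        tasks i and j (symmetric, higher = more different).
    """
    n = len(relevance)
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # Diversity term: distance to the closest already-selected task,
            # so near-duplicates of chosen tasks are penalized.
            diversity = min((divergence[i][j] for j in selected), default=0.0)
            score = relevance[i] + diversity
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# Toy example: four candidate tasks; tasks 0 and 1 are near-duplicates.
relevance = np.array([0.9, 0.8, 0.1, 0.85])
divergence = np.array([
    [0.0, 0.1, 0.9, 0.8],
    [0.1, 0.0, 0.9, 0.8],
    [0.9, 0.9, 0.0, 0.9],
    [0.8, 0.8, 0.9, 0.0],
])
print(select_tasks(relevance, divergence, 2))  # picks 0, then 3 (not the redundant 1)
```

Note how the diversity term skips task 1 despite its high relevance, because it is nearly identical to the already-selected task 0, matching the abstract's requirement that chosen tasks be "different enough from one another."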
Cite
Text
Gutierrez and Leonetti. "Information-Theoretic Task Selection for Meta-Reinforcement Learning." Neural Information Processing Systems, 2020.

Markdown

[Gutierrez and Leonetti. "Information-Theoretic Task Selection for Meta-Reinforcement Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/gutierrez2020neurips-informationtheoretic/)

BibTeX
@inproceedings{gutierrez2020neurips-informationtheoretic,
title = {{Information-Theoretic Task Selection for Meta-Reinforcement Learning}},
author = {Gutierrez, Ricardo Luna and Leonetti, Matteo},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/gutierrez2020neurips-informationtheoretic/}
}