Language Representations for Generalization in Reinforcement Learning
Abstract
The choice of state and action representation in Reinforcement Learning (RL) has a significant effect on agent performance for the training task, but its relationship with generalization to new tasks remains under-explored. One approach to improving generalization, investigated here, is the use of language as a representation. We compare vector-state and discrete-action representations to language representations. We find that agents using language representations generalize better and can solve tasks with more entities, new entities, and greater complexity than seen in the training task. We attribute this to the compositionality of language.
Cite
Text

Goodger et al. "Language Representations for Generalization in Reinforcement Learning." Proceedings of The 13th Asian Conference on Machine Learning, 2021.

Markdown

[Goodger et al. "Language Representations for Generalization in Reinforcement Learning." Proceedings of The 13th Asian Conference on Machine Learning, 2021.](https://mlanthology.org/acml/2021/goodger2021acml-language/)

BibTeX
@inproceedings{goodger2021acml-language,
title = {{Language Representations for Generalization in Reinforcement Learning}},
author = {Goodger, Nikolaj and Vamplew, Peter and Foale, Cameron and Dazeley, Richard},
booktitle = {Proceedings of The 13th Asian Conference on Machine Learning},
year = {2021},
pages = {390--405},
volume = {157},
url = {https://mlanthology.org/acml/2021/goodger2021acml-language/}
}