On the Practical Consistency of Meta-Reinforcement Learning Algorithms
Abstract
Consistency is the theoretical property of a meta-learning algorithm that ensures it can, under certain assumptions, adapt to any task at test time. An open question is whether and how theoretical consistency translates into practice, compared to inconsistent algorithms. In this paper, we empirically investigate this question on a set of representative meta-RL algorithms. We find that theoretically consistent algorithms can indeed usually adapt to out-of-distribution (OOD) tasks, while inconsistent ones cannot, although consistent algorithms can still fail in practice for reasons such as poor exploration. We further find that theoretically inconsistent algorithms can be made consistent by continuing to update all agent components on the OOD tasks, and then adapt as well as or better than originally consistent ones. We conclude that theoretical consistency is indeed a desirable property, and that inconsistent meta-RL algorithms can easily be made consistent to enjoy the same benefits.
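The mechanism highlighted in the abstract, continuing to update all agent components on the OOD task at test time rather than freezing some of them, can be illustrated with a minimal sketch. The sketch below is not from the paper; the Agent architecture, the adapt_on_ood_task helper, and the placeholder loss are hypothetical stand-ins, assuming a PyTorch-style agent with a task-inference encoder and a policy head.

# Minimal sketch (not from the paper): make a meta-RL agent "consistent"
# at test time by continuing gradient updates on *all* of its components
# (encoder and policy alike) using data from the new, OOD task.
import torch
import torch.nn as nn

class Agent(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, latent_dim)            # task-inference component
        self.policy = nn.Linear(obs_dim + latent_dim, act_dim)   # control component

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        z = torch.tanh(self.encoder(obs))
        return self.policy(torch.cat([obs, z], dim=-1))

def adapt_on_ood_task(agent: Agent, rollout_loss_fn, steps: int = 100, lr: float = 1e-3):
    # Test-time adaptation: optimize every parameter of the agent on the
    # OOD task, instead of freezing parts of it as many meta-RL methods do.
    opt = torch.optim.Adam(agent.parameters(), lr=lr)
    for _ in range(steps):
        loss = rollout_loss_fn(agent)   # e.g. a policy-gradient or TD loss on fresh rollouts
        opt.zero_grad()
        loss.backward()
        opt.step()

if __name__ == "__main__":
    # Stand-in usage; a real setup would compute the loss from rollouts on the OOD task.
    agent = Agent(obs_dim=4, act_dim=2)
    dummy_obs = torch.randn(32, 4)
    dummy_loss = lambda a: a(dummy_obs).pow(2).mean()   # placeholder for an RL objective
    adapt_on_ood_task(agent, dummy_loss, steps=10)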
Cite
Text
Xiong et al. "On the Practical Consistency of Meta-Reinforcement Learning Algorithms." NeurIPS 2021 Workshops: MetaLearn, 2021.
Markdown
[Xiong et al. "On the Practical Consistency of Meta-Reinforcement Learning Algorithms." NeurIPS 2021 Workshops: MetaLearn, 2021.](https://mlanthology.org/neuripsw/2021/xiong2021neuripsw-practical/)
BibTeX
@inproceedings{xiong2021neuripsw-practical,
title = {{On the Practical Consistency of Meta-Reinforcement Learning Algorithms}},
author = {Xiong, Zheng and Zintgraf, Luisa M and Beck, Jacob Austin and Vuorio, Risto and Whiteson, Shimon},
booktitle = {NeurIPS 2021 Workshops: MetaLearn},
year = {2021},
url = {https://mlanthology.org/neuripsw/2021/xiong2021neuripsw-practical/}
}