Bootstrapped Meta-Learning
Abstract
We propose an algorithm for meta-optimization that lets the meta-learner teach itself. The algorithm first bootstraps a target from the meta-learner, then optimises the meta-learner by minimising the distance to that target under some loss. Focusing on meta-learning with gradients, we establish conditions that guarantee performance improvements and show that the improvement is related to the target distance. Thus, by controlling curvature, the distance measure can be used to ease meta-optimization. Further, the bootstrapping mechanism can extend the effective meta-learning horizon without requiring backpropagation through all updates. The algorithm is versatile and easy to implement. We achieve a new state of the art for model-free agents on the Atari ALE benchmark, improve upon MAML in few-shot learning, and demonstrate how our approach opens up new possibilities by meta-learning efficient exploration in an epsilon-greedy Q-learning agent.
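The core idea in the abstract can be illustrated with a minimal toy sketch (not the authors' code; all names and constants here are assumptions). The inner problem minimises f(theta) = 0.5 * theta**2 by SGD, the meta-parameter is the inner learning rate eta, the bootstrapped target is obtained by running a few extra inner steps and treating the result as a fixed target (a stop-gradient), and the meta-update minimises the squared distance to that target:

```python
# Toy sketch of bootstrapped meta-learning on a 1-D quadratic (assumed setup).
# One SGD step on f(theta) = 0.5 * theta**2 multiplies theta by (1 - eta),
# so K inner steps give theta_K = theta0 * (1 - eta)**K.

K, L = 5, 5      # inner steps / extra bootstrap steps (illustrative values)
theta0 = 1.0     # initial inner parameter
eta = 0.1        # meta-learned inner learning rate
beta = 0.05      # meta learning rate

def inner(theta, eta, steps):
    """Run `steps` SGD steps on f(theta) = 0.5 * theta**2."""
    for _ in range(steps):
        theta = theta - eta * theta   # the gradient of f is theta itself
    return theta

theta_K = inner(theta0, eta, K)

# Bootstrap: run L further inner steps and treat the result as a constant
# target, so no meta-gradient flows through these extra updates.
target = inner(theta_K, eta, L)

# Meta-loss: squared distance from theta_K(eta) to the bootstrapped target.
# For this toy, d(theta_K)/d(eta) = -K * theta0 * (1 - eta)**(K - 1) by hand.
dist = theta_K - target
meta_grad = 2.0 * dist * (-K * theta0 * (1.0 - eta) ** (K - 1))
eta = eta - beta * meta_grad          # one meta-update on the learning rate

new_dist = abs(inner(theta0, eta, K) - target)
```

After the single meta-update, re-running the K inner steps lands closer to the bootstrapped target, which is the sense in which the meta-learner "teaches itself": the target encodes where more inner updates would have gone, without backpropagating through those extra updates.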
Cite
Text
Flennerhag et al. "Bootstrapped Meta-Learning." NeurIPS 2021 Workshops: MetaLearn, 2021.
Markdown
[Flennerhag et al. "Bootstrapped Meta-Learning." NeurIPS 2021 Workshops: MetaLearn, 2021.](https://mlanthology.org/neuripsw/2021/flennerhag2021neuripsw-bootstrapped/)
BibTeX
@inproceedings{flennerhag2021neuripsw-bootstrapped,
title = {{Bootstrapped Meta-Learning}},
author = {Flennerhag, Sebastian and Schroecker, Yannick and Zahavy, Tom and van Hasselt, Hado and Silver, David and Singh, Satinder},
booktitle = {NeurIPS 2021 Workshops: MetaLearn},
year = {2021},
url = {https://mlanthology.org/neuripsw/2021/flennerhag2021neuripsw-bootstrapped/}
}