Optimistic Meta-Gradients
Abstract
We study the connection between gradient-based meta-learning and convex optimisation. We observe that gradient descent with momentum is a special case of meta-gradients, and building on recent results in optimisation, we prove convergence rates for meta-learning in the single-task setting. While a meta-learned update rule can yield faster convergence up to a constant factor, it is not sufficient for acceleration. Instead, some form of optimism is required. We show that optimism in meta-learning can be captured through the recently proposed Bootstrapped Meta-Gradient method, providing deeper insight into its underlying mechanics.
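To make the first observation concrete, below is a minimal sketch (not from the paper) contrasting heavy-ball momentum with an update rule parameterised by meta-parameters, which reduces to momentum when those meta-parameters are held fixed, together with an optimistic gradient step of the kind the abstract argues is needed for acceleration. The quadratic objective and all function names are hypothetical choices for illustration only.

```python
import numpy as np

def grad(x):
    # Hypothetical objective f(x) = 0.5 * ||x||^2, so the gradient is x itself.
    return x

def momentum_step(x, m, lr=0.1, beta=0.9):
    # Classical gradient descent with heavy-ball momentum.
    m = beta * m + grad(x)
    return x - lr * m, m

def meta_step(x, m, eta):
    # An update rule parameterised by meta-parameters eta = (lr, beta).
    # Holding eta fixed at (0.1, 0.9) recovers momentum_step exactly,
    # i.e. momentum is a special case of a (meta-learnable) update rule.
    lr, beta = eta
    m = beta * m + grad(x)
    return x - lr * m, m

def optimistic_step(x, g_prev, lr=0.1):
    # Optimistic gradient step: x <- x - lr * (g_t + (g_t - g_{t-1})),
    # i.e. the current gradient plus a prediction of how it is changing;
    # this is the generic form of "optimism" referred to in the abstract.
    g = grad(x)
    return x - lr * (2.0 * g - g_prev), g

x, m = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(50):
    x, m = meta_step(x, m, eta=(0.1, 0.9))
print(x)  # approaches the minimiser at the origin
```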
Cite
Text
Flennerhag et al. "Optimistic Meta-Gradients." NeurIPS 2022 Workshops: MetaLearn, 2022.

Markdown

[Flennerhag et al. "Optimistic Meta-Gradients." NeurIPS 2022 Workshops: MetaLearn, 2022.](https://mlanthology.org/neuripsw/2022/flennerhag2022neuripsw-optimistic/)

BibTeX
@inproceedings{flennerhag2022neuripsw-optimistic,
  title = {{Optimistic Meta-Gradients}},
  author = {Flennerhag, Sebastian and Zahavy, Tom and O'Donoghue, Brendan and van Hasselt, Hado and György, András and Singh, Satinder},
  booktitle = {NeurIPS 2022 Workshops: MetaLearn},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/flennerhag2022neuripsw-optimistic/}
}