Meta-Learning with Warped Gradient Descent

Abstract

Learning an efficient update rule from data that promotes rapid learning of new tasks from the same distribution remains an open problem in meta-learning. Typically, previous works have approached this issue either by attempting to train a neural network that directly produces updates or by attempting to learn better initialisations or scaling factors for a gradient-based update rule. Both of these approaches pose challenges. On one hand, directly producing an update forgoes a useful inductive bias and can easily lead to non-converging behaviour. On the other hand, approaches that try to control a gradient-based update rule typically resort to computing gradients through the learning process to obtain their meta-gradients, leading to methods that cannot scale beyond few-shot task adaptation. In this work, we propose Warped Gradient Descent (WarpGrad), a method that intersects these approaches to mitigate their limitations. WarpGrad meta-learns an efficiently parameterised preconditioning matrix that facilitates gradient descent across the task distribution. Preconditioning arises by interleaving non-linear layers, referred to as warp-layers, between the layers of a task-learner. Warp-layers are meta-learned without backpropagating through the task training process, in a manner similar to methods that learn to directly produce updates. WarpGrad is computationally efficient, easy to implement, and can scale to arbitrarily large meta-learning problems. We provide a geometrical interpretation of the approach and evaluate its effectiveness in a variety of settings, including few-shot, standard supervised, continual and reinforcement learning.
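The interleaving described above is easiest to see in code. Below is a minimal sketch, assuming PyTorch; the class name WarpedNet, the layer sizes, and the inner-loop learning rate are illustrative assumptions, not taken from the paper. A non-linear warp-layer sits between two task layers; during task adaptation only the task parameters are updated, so the backward pass through the frozen warp-layer is what preconditions the task gradients.

# Minimal sketch of warp-layer preconditioning (assumes PyTorch).
# All names and hyperparameters here are illustrative, not from the paper.
import torch
import torch.nn as nn

class WarpedNet(nn.Module):
    """Task layers interleaved with a meta-learned warp-layer."""
    def __init__(self, in_dim=4, hidden=8, out_dim=2):
        super().__init__()
        # Task parameters: adapted per task with plain gradient descent.
        self.task1 = nn.Linear(in_dim, hidden)
        self.task2 = nn.Linear(hidden, out_dim)
        # Warp parameters: frozen during task adaptation; backprop through
        # this non-linear layer preconditions the task-parameter gradients.
        self.warp1 = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh())

    def forward(self, x):
        x = torch.relu(self.task1(x))
        x = self.warp1(x)  # warp-layer interleaved between task layers
        return self.task2(x)

    def task_parameters(self):
        return list(self.task1.parameters()) + list(self.task2.parameters())

net = WarpedNet()
x, y = torch.randn(16, 4), torch.randint(0, 2, (16,))
loss = nn.functional.cross_entropy(net(x), y)

# Inner (task-adaptation) step: update only the task parameters. The warp
# parameters stay fixed, so the chain rule through warp1 reshapes (warps)
# the gradient geometry seen by task1 and task2.
grads = torch.autograd.grad(loss, net.task_parameters())
with torch.no_grad():
    for p, g in zip(net.task_parameters(), grads):
        p -= 0.1 * g

In the full method, the warp parameters are then trained on a meta-objective accumulated over such task-adaptation steps across the task distribution, without backpropagating through the adaptation process itself; the sketch above covers only the inner update.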

Cite

Text

Flennerhag et al. "Meta-Learning with Warped Gradient Descent." International Conference on Learning Representations, 2020.

Markdown

[Flennerhag et al. "Meta-Learning with Warped Gradient Descent." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/flennerhag2020iclr-metalearning/)

BibTeX

@inproceedings{flennerhag2020iclr-metalearning,
  title     = {{Meta-Learning with Warped Gradient Descent}},
  author    = {Flennerhag, Sebastian and Rusu, Andrei A. and Pascanu, Razvan and Visin, Francesco and Yin, Hujun and Hadsell, Raia},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/flennerhag2020iclr-metalearning/}
}