EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization

Abstract

Gradient-based meta-learning and hyperparameter optimization have seen significant progress recently, enabling practical end-to-end training of neural networks together with many hyperparameters. Nevertheless, existing approaches are relatively expensive, as they need to compute second-order derivatives and store a longer computational graph. This cost prevents scaling them to larger network architectures. We present EvoGrad, a new approach to meta-learning that draws upon evolutionary techniques to compute hypergradients more efficiently. EvoGrad estimates hypergradients with respect to hyperparameters without computing second-order gradients or storing a longer computational graph, leading to significant improvements in efficiency. We evaluate EvoGrad on three substantial recent meta-learning applications: cross-domain few-shot learning with feature-wise transformations, noisy label learning with Meta-Weight-Net, and low-resource cross-lingual learning with meta representation transformation. The results show that EvoGrad significantly improves efficiency and enables scaling meta-learning to larger architectures, for example from ResNet10 to ResNet34.
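To make the abstract's claim concrete, below is a minimal sketch of an EvoGrad-style hypergradient estimate: sample a few perturbed copies of the model weights, weight them by a softmax over their training losses (where the hyperparameter enters), and backpropagate the validation loss of the weighted combination to the hyperparameter. Because the combined weights depend on the hyperparameter only through the softmax, no second-order gradients or unrolled graphs are needed. All names (`evograd_hypergradient`, `hparam`, the loss-scaling role of the hyperparameter, `K`, `sigma`, `temperature`) are illustrative assumptions for this sketch, not the authors' released code.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call


def evograd_hypergradient(model, hparam, train_batch, val_batch,
                          K=2, sigma=0.01, temperature=0.05):
    """Sketch: estimate dL_val / dhparam using only first-order gradients."""
    x_tr, y_tr = train_batch
    x_val, y_val = val_batch
    names = [n for n, _ in model.named_parameters()]
    theta = [p.detach() for p in model.parameters()]

    # 1. Sample K perturbed parameter candidates theta_k = theta + sigma * eps_k.
    candidates = [[p + sigma * torch.randn_like(p) for p in theta]
                  for _ in range(K)]

    # 2. Training loss of each candidate; the hyperparameter enters here
    #    (illustrative choice: a global loss-scaling hyperparameter).
    losses = torch.stack([
        hparam * F.cross_entropy(
            functional_call(model, dict(zip(names, cand)), (x_tr,)), y_tr)
        for cand in candidates
    ])

    # 3. Softmax weights over candidates (lower training loss -> larger weight).
    w = F.softmax(-losses / temperature, dim=0)

    # 4. Combined parameters theta* = sum_k w_k * theta_k; they depend on hparam
    #    only through w, so backprop needs no second-order terms.
    theta_star = {n: sum(w[k] * candidates[k][i] for k in range(K))
                  for i, n in enumerate(names)}

    # 5. Validation loss at theta*; its gradient w.r.t. hparam is the hypergradient.
    val_loss = F.cross_entropy(functional_call(model, theta_star, (x_val,)), y_val)
    return torch.autograd.grad(val_loss, hparam)[0]
```

In practice, `hparam` would be a tensor with `requires_grad=True` (e.g. the parameters of a loss-weighting network, as in the Meta-Weight-Net application), and the returned hypergradient would drive an outer-loop optimizer step on it.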

Cite

Text

Bohdal et al. "EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization." Neural Information Processing Systems, 2021.

Markdown

[Bohdal et al. "EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/bohdal2021neurips-evograd/)

BibTeX

@inproceedings{bohdal2021neurips-evograd,
  title     = {{EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization}},
  author    = {Bohdal, Ondrej and Yang, Yongxin and Hospedales, Timothy},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/bohdal2021neurips-evograd/}
}