Online Hyperparameter Meta-Learning with Hypergradient Distillation
Abstract
Many gradient-based meta-learning methods assume a set of parameters that do not participate in the inner optimization, which can be considered hyperparameters. Although such hyperparameters can be optimized with existing gradient-based hyperparameter optimization (HO) methods, those methods suffer from the following issues: unrolled differentiation methods do not scale well to high-dimensional hyperparameters or long horizons, Implicit Function Theorem (IFT) based methods are restrictive for online optimization, and short-horizon approximations suffer from short-horizon bias. In this work, we propose a novel HO method that overcomes these limitations by approximating the second-order term with knowledge distillation. Specifically, we parameterize a single Jacobian-vector product (JVP) for each HO step and minimize its distance to the true second-order term. Our method allows online optimization and scales to both the hyperparameter dimension and the horizon length. We demonstrate the effectiveness of our method with three different meta-learning methods on two benchmark datasets.
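
The sketch below is a minimal illustration of the idea in the abstract, not the authors' implementation: the toy ridge-regression inner problem, the single-step inner update, and the linear surrogate phi are all assumptions made for exposition. It shows, in JAX, how the exact second-order term of the hypergradient (computed here with jax.vjp as a transposed Jacobian-vector product) can be distilled into a cheap parameterized map that then replaces it during online hyperparameter updates.

import jax
import jax.numpy as jnp

def inner_loss(theta, lam, batch):
    # Toy inner objective: least squares with per-parameter regularization
    # weighted by the hyperparameter lam (assumed, for illustration only).
    x, y = batch
    return jnp.mean((x @ theta - y) ** 2) + jnp.sum(jax.nn.softplus(lam) * theta ** 2)

def outer_loss(theta, val_batch):
    # Validation objective used to form the hypergradient.
    x, y = val_batch
    return jnp.mean((x @ theta - y) ** 2)

def true_second_order_term(theta, lam, batch, v, lr=0.1):
    # Exact second-order term for a one-step inner update theta -> theta - lr * grad:
    # v^T (d theta_new / d lam), obtained with jax.vjp. This is the expensive
    # quantity the surrogate is distilled to match.
    def update_map(lam_):
        g = jax.grad(inner_loss)(theta, lam_, batch)
        return theta - lr * g
    _, vjp_fn = jax.vjp(update_map, lam)
    return vjp_fn(v)[0]

def surrogate_term(phi, v):
    # Cheap parameterized replacement for the second-order term
    # (a plain linear map here, purely as an assumption for the sketch).
    return phi @ v

def distill_loss(phi, theta, lam, batch, v):
    # Distillation objective: minimize the distance between the surrogate
    # and the true second-order term.
    target = jax.lax.stop_gradient(true_second_order_term(theta, lam, batch, v))
    return jnp.sum((surrogate_term(phi, v) - target) ** 2)

def hyper_step(phi, theta, lam, val_batch, hp_lr=1e-2):
    # One online HO step. The outer loss depends on lam only through theta,
    # so the hypergradient is the (distilled) indirect term; a direct
    # dL_val/dlam term would be added here if the outer loss used lam explicitly.
    v = jax.grad(outer_loss)(theta, val_batch)
    hypergrad = surrogate_term(phi, v)
    return lam - hp_lr * hypergrad

In an actual training loop, one would minimize distill_loss over phi alongside the inner updates (evaluating the exact target only where affordable), so that the per-step cost of hyper_step stays independent of the hyperparameter dimension and the horizon length; this is only meant to convey the generic distillation pattern, not the paper's exact procedure.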
Cite
Text
Lee et al. "Online Hyperparameter Meta-Learning with Hypergradient Distillation." International Conference on Learning Representations, 2022.
Markdown
[Lee et al. "Online Hyperparameter Meta-Learning with Hypergradient Distillation." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/lee2022iclr-online/)
BibTeX
@inproceedings{lee2022iclr-online,
title = {{Online Hyperparameter Meta-Learning with Hypergradient Distillation}},
author = {Lee, Hae Beom and Lee, Hayeon and Shin, JaeWoong and Yang, Eunho and Hospedales, Timothy and Hwang, Sung Ju},
booktitle = {International Conference on Learning Representations},
year = {2022},
url = {https://mlanthology.org/iclr/2022/lee2022iclr-online/}
}