Marthe: Scheduling the Learning Rate via Online Hypergradients

Abstract

We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization, aiming for good generalization. We describe the structure of the gradient of a validation error w.r.t. the learning rate schedule -- the hypergradient. Based on this, we introduce MARTHE, a novel online algorithm guided by cheap approximations of the hypergradient that uses past information from the optimization trajectory to simulate future behaviour. It interpolates between two recent techniques, RTHO (Franceschi et al., 2017) and HD (Baydin et al., 2018), and is able to produce learning rate schedules that are more stable, leading to models that generalize better.
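To make the idea of hypergradient-guided scheduling concrete, here is a minimal, illustrative sketch (not the authors' code) of the HD-style update (Baydin et al., 2018), one of the two endpoints MARTHE interpolates between: the learning rate itself is adjusted online using the inner product of consecutive loss gradients, which approximates the hypergradient of the loss w.r.t. the previous learning rate. The quadratic objective and the names `lr` and `hyper_lr` are assumptions chosen for the demo.

```python
# Hedged sketch of hypergradient-descent-style learning rate adaptation.
# Not the MARTHE algorithm itself; a toy HD-like baseline on a quadratic.
import numpy as np

def grad(w):
    # Gradient of the toy objective f(w) = 0.5 * ||w||^2
    return w

w = np.array([5.0, -3.0])
lr = 0.01          # initial learning rate (assumed value)
hyper_lr = 1e-3    # step size applied to the learning rate itself
prev_grad = np.zeros_like(w)

for t in range(100):
    g = grad(w)
    # HD-style update: the hypergradient of the loss w.r.t. the previous
    # learning rate is -g . prev_grad, so take a descent step on lr:
    lr += hyper_lr * float(g @ prev_grad)
    w = w - lr * g       # standard SGD step with the adapted rate
    prev_grad = g

print(float(np.linalg.norm(w)))  # norm shrinks toward 0 as lr adapts
```

When consecutive gradients align, the learning rate grows; when they point in opposing directions (overshooting), it shrinks. MARTHE generalizes this by using longer-horizon information from the optimization trajectory rather than only the last step.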

Cite

Text

Donini et al. "Marthe: Scheduling the Learning Rate via Online Hypergradients." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/293

Markdown

[Donini et al. "Marthe: Scheduling the Learning Rate via Online Hypergradients." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/donini2020ijcai-marthe/) doi:10.24963/IJCAI.2020/293

BibTeX

@inproceedings{donini2020ijcai-marthe,
  title     = {{Marthe: Scheduling the Learning Rate via Online Hypergradients}},
  author    = {Donini, Michele and Franceschi, Luca and Majumder, Orchid and Pontil, Massimiliano and Frasconi, Paolo},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {2119--2125},
  doi       = {10.24963/IJCAI.2020/293},
  url       = {https://mlanthology.org/ijcai/2020/donini2020ijcai-marthe/}
}