Gradient-Based Hyperparameter Optimization Through Reversible Learning
Abstract
Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.
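The abstract's core idea is that SGD with momentum is an invertible dynamical system: running its update rule backwards from the final weights and velocity recovers earlier iterates, so reverse-mode differentiation through training does not require storing the whole weight trajectory. Below is a minimal sketch of that forward/reverse symmetry, not the authors' released code; the names (`loss_grad`, `alphas`, `gammas`) and the toy quadratic loss are illustrative, and the reversal here is exact only up to floating-point rounding (the paper additionally stores the bits lost to finite precision).

```python
import numpy as np

def sgd_forward(w, v, loss_grad, alphas, gammas):
    """Run SGD with momentum; return final weights and velocity."""
    for alpha, gamma in zip(alphas, gammas):
        v = gamma * v - (1.0 - gamma) * loss_grad(w)   # momentum update
        w = w + alpha * v                              # weight update
    return w, v

def sgd_reverse(w, v, loss_grad, alphas, gammas):
    """Step the same dynamics backwards to recover earlier iterates."""
    for alpha, gamma in zip(reversed(alphas), reversed(gammas)):
        w = w - alpha * v                              # undo weight update
        v = (v + (1.0 - gamma) * loss_grad(w)) / gamma # undo momentum update
    return w, v

# Toy quadratic loss L(w) = 0.5 * ||w||^2, so its gradient is w.
loss_grad = lambda w: w
w0, v0 = np.ones(3), np.zeros(3)
alphas = [0.1] * 50   # per-step learning rates (hyperparameters)
gammas = [0.9] * 50   # per-step momentum decays (hyperparameters)

wT, vT = sgd_forward(w0, v0, loss_grad, alphas, gammas)
w_back, v_back = sgd_reverse(wT, vT, loss_grad, alphas, gammas)
print(np.max(np.abs(w_back - w0)))  # small, up to floating-point error
```

In the paper, this reversal is interleaved with reverse-mode accumulation so that gradients of the validation loss with respect to the per-step hyperparameters (such as `alphas` and `gammas` above) fall out of the backward pass.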
Cite
Text
Maclaurin et al. "Gradient-Based Hyperparameter Optimization Through Reversible Learning." International Conference on Machine Learning, 2015.
Markdown
[Maclaurin et al. "Gradient-Based Hyperparameter Optimization Through Reversible Learning." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/maclaurin2015icml-gradientbased/)
BibTeX
@inproceedings{maclaurin2015icml-gradientbased,
title = {{Gradient-Based Hyperparameter Optimization Through Reversible Learning}},
author = {Maclaurin, Dougal and Duvenaud, David and Adams, Ryan},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {2113--2122},
volume = {37},
url = {https://mlanthology.org/icml/2015/maclaurin2015icml-gradientbased/}
}