Learning to Learn Without Gradient Descent by Gradient Descent
Abstract
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks, and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
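To make the setup concrete, the sketch below (not the authors' code) shows the interface of such a recurrent optimizer: at each step the RNN observes the previous query point and its function value, updates its hidden state, and proposes the next query, never seeing gradients of the objective. The network weights, dimensions, and the toy objective `f` are placeholders; in the paper the weights are meta-learned by gradient descent on synthetic functions such as Gaussian process samples.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, HID = 1, 16  # search dimension and hidden size (arbitrary choices)

# Placeholder RNN parameters; these would be meta-learned in practice.
W_in = rng.normal(scale=0.5, size=(HID, DIM + 1))   # input: [x_prev, y_prev]
W_h = rng.normal(scale=0.5, size=(HID, HID))
W_out = rng.normal(scale=0.5, size=(DIM, HID))

def rnn_optimizer_step(x_prev, y_prev, h):
    """One unroll step: observe (x_prev, y_prev), propose the next query x."""
    inp = np.concatenate([x_prev, [y_prev]])
    h = np.tanh(W_in @ inp + W_h @ h)
    x_next = np.tanh(W_out @ h)  # proposal squashed into [-1, 1]^DIM
    return x_next, h

def f(x):
    """Toy black-box objective: the optimizer only sees its values."""
    return float(np.sum((x - 0.3) ** 2))

# Unrolled optimization episode: a fixed query budget, no gradients of f.
x, h = np.zeros(DIM), np.zeros(HID)
best = f(x)
for t in range(20):
    x, h = rnn_optimizer_step(x, f(x), h)
    best = min(best, f(x))
```

With trained weights, the hidden state lets the network remember where it has already queried and how good the values were, which is what allows the exploration/exploitation trade-off described in the abstract.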
Cite
Text
Chen et al. "Learning to Learn Without Gradient Descent by Gradient Descent." International Conference on Machine Learning, 2017.
Markdown
[Chen et al. "Learning to Learn Without Gradient Descent by Gradient Descent." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/chen2017icml-learning/)
BibTeX
@inproceedings{chen2017icml-learning,
title = {{Learning to Learn Without Gradient Descent by Gradient Descent}},
author = {Chen, Yutian and Hoffman, Matthew W. and Colmenarejo, Sergio Gómez and Denil, Misha and Lillicrap, Timothy P. and Botvinick, Matt and de Freitas, Nando},
booktitle = {International Conference on Machine Learning},
year = {2017},
pages = {748--756},
volume = {70},
url = {https://mlanthology.org/icml/2017/chen2017icml-learning/}
}