Efficient Transfer Learning Method for Automatic Hyperparameter Tuning

Abstract

We propose a fast and effective algorithm for automatic hyperparameter tuning that can generalize across datasets. Our method is an instance of sequential model-based optimization (SMBO) that transfers information by constructing a common response surface for all datasets, similar to Bardenet et al. (2013). The time complexity of reconstructing the response surface at every SMBO iteration in our method is linear in the number of trials (significantly lower than that of previous methods with comparable performance), allowing the method to realistically scale to many more datasets. Specifically, we use deviations from the per-dataset mean as the response values. We empirically show the superiority of our method on a large number of synthetic and real-world datasets for tuning hyperparameters of logistic regression and ensembles of classifiers.
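The sketch below illustrates the core idea stated in the abstract: center each dataset's observed responses around its own mean, pool the resulting deviations into one common response surface, and pick the next hyperparameter to try with an SMBO acquisition step. This is only a minimal illustration under assumptions, not the authors' implementation: the Gaussian process surrogate, expected-improvement acquisition, the candidate grid, and all dataset names and values here are hypothetical, and the paper's linear-time surface reconstruction is not reproduced.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.stats import norm

# Hypothetical trials: hyperparameter configurations (e.g. a regularization
# strength) evaluated on two previous datasets, with validation errors.
prev_trials = {
    "dataset_a": (np.array([[0.01], [0.1], [1.0]]), np.array([0.32, 0.25, 0.40])),
    "dataset_b": (np.array([[0.01], [1.0], [10.0]]), np.array([0.18, 0.22, 0.35])),
}
# Trials observed so far on the new (target) dataset.
new_X = np.array([[0.1]])
new_y = np.array([0.28])

# Common response surface: subtract each dataset's mean so that only
# deviations from the per-dataset mean are shared across datasets.
X_all, y_all = [], []
for X, y in list(prev_trials.values()) + [(new_X, new_y)]:
    X_all.append(X)
    y_all.append(y - y.mean())
X_all = np.vstack(X_all)
y_all = np.concatenate(y_all)

# A GP surrogate over the pooled, centered responses (log-scaled inputs).
surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=False)
surrogate.fit(np.log10(X_all), y_all)

# Expected-improvement acquisition over a candidate grid (minimization).
candidates = np.log10(np.logspace(-3, 2, 200)).reshape(-1, 1)
mu, sigma = surrogate.predict(candidates, return_std=True)
best = (new_y - new_y.mean()).min()
z = (best - mu) / np.maximum(sigma, 1e-9)
ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
next_config = 10 ** candidates[np.argmax(ei), 0]
print(f"next hyperparameter to evaluate: {next_config:.4f}")

Because only deviations from each dataset's mean are modeled, information from previous datasets shifts the surrogate's shape without requiring their absolute error levels to match the new dataset's.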

Cite

Text

Yogatama and Mann. "Efficient Transfer Learning Method for Automatic Hyperparameter Tuning." International Conference on Artificial Intelligence and Statistics, 2014.

Markdown

[Yogatama and Mann. "Efficient Transfer Learning Method for Automatic Hyperparameter Tuning." International Conference on Artificial Intelligence and Statistics, 2014.](https://mlanthology.org/aistats/2014/yogatama2014aistats-efficient/)

BibTeX

@inproceedings{yogatama2014aistats-efficient,
  title     = {{Efficient Transfer Learning Method for Automatic Hyperparameter Tuning}},
  author    = {Yogatama, Dani and Mann, Gideon},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2014},
  pages     = {1077--1085},
  url       = {https://mlanthology.org/aistats/2014/yogatama2014aistats-efficient/}
}