Multi-Objective Tree-Structured Parzen Estimator Meets Meta-Learning

Abstract

Hyperparameter optimization (HPO) is essential for achieving strong performance in deep learning, and practitioners often need to consider trade-offs between multiple metrics, such as error rate, latency, memory consumption, robustness, and algorithmic fairness. Given this demand and the heavy computational cost of deep learning, accelerating multi-objective (MO) optimization is increasingly important. Although meta-learning has been studied extensively as a way to speed up HPO, existing methods are not applicable to the MO tree-structured Parzen estimator (MO-TPE), a simple yet powerful MO HPO algorithm. In this paper, we extend TPE's acquisition function to the meta-learning setting, using a task similarity defined by the overlap between the promising regions of each task. In a comprehensive set of experiments, we demonstrate that our method accelerates MO-TPE on tabular HPO benchmarks and yields state-of-the-art performance. Our method was also validated externally by winning the AutoML 2022 competition on "Multiobjective Hyperparameter Optimization for Transformers".
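To make the idea in the abstract concrete: TPE ranks candidate configurations by the density ratio l(x)/g(x), where l and g are densities fitted to the promising (top-gamma) and remaining observations, and the meta-learning extension reweights each meta-task by how similar it is to the target task. The snippet below is a minimal sketch, not the authors' implementation: the `gaussian_kde` densities, the `gamma` split, and the `overlap` similarity proxy are simplifying assumptions standing in for the paper's task-similarity measure, which is likewise defined via the overlap of promising regions.

```python
import numpy as np
from scipy.stats import gaussian_kde


def split_and_fit(observations, losses, gamma=0.15):
    """TPE-style split: fit KDEs to the promising (top-gamma) and
    remaining observations. Assumes enough observations per group
    (more points than dimensions) for the KDE covariance estimate."""
    order = np.argsort(losses)
    n_good = max(2, int(np.ceil(gamma * len(losses))))
    good, bad = observations[order[:n_good]], observations[order[n_good:]]
    # gaussian_kde expects data with shape (d, n)
    return gaussian_kde(good.T), gaussian_kde(bad.T), good


def meta_tpe_acquisition(target, meta_tasks, candidates, gamma=0.15):
    """Score candidates by a similarity-weighted mixture of per-task
    density ratios l(x)/g(x). The 'overlap' weight below is an
    illustrative proxy: the fraction of a meta-task's promising
    configurations that the target task also rates as promising."""
    obs_t, loss_t = target
    l_t, g_t, _ = split_and_fit(obs_t, loss_t, gamma)

    scores = [l_t(candidates.T) / (g_t(candidates.T) + 1e-12)]
    weights = [1.0]  # the target task always has full weight

    for obs_m, loss_m in meta_tasks:
        l_m, g_m, good_m = split_and_fit(obs_m, loss_m, gamma)
        # similarity proxy: overlap of promising regions
        overlap = float(np.mean(l_t(good_m.T) > g_t(good_m.T)))
        scores.append(l_m(candidates.T) / (g_m(candidates.T) + 1e-12))
        weights.append(overlap)

    weights = np.asarray(weights) / np.sum(weights)
    return sum(w * s for w, s in zip(weights, scores))
```

Here each task is an `(observations, losses)` pair of NumPy arrays with shapes `(n, d)` and `(n,)`, and `candidates` shares the same `d`; dissimilar meta-tasks receive small weights, so their observations barely perturb the target task's own density ratio.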

Cite

Text

Watanabe et al. "Multi-Objective Tree-Structured Parzen Estimator Meets Meta-Learning." NeurIPS 2022 Workshops: MetaLearn, 2022.

Markdown

[Watanabe et al. "Multi-Objective Tree-Structured Parzen Estimator Meets Meta-Learning." NeurIPS 2022 Workshops: MetaLearn, 2022.](https://mlanthology.org/neuripsw/2022/watanabe2022neuripsw-multiobjective/)

BibTeX

@inproceedings{watanabe2022neuripsw-multiobjective,
  title     = {{Multi-Objective Tree-Structured Parzen Estimator Meets Meta-Learning}},
  author    = {Watanabe, Shuhei and Awad, Noor and Onishi, Masaki and Hutter, Frank},
  booktitle = {NeurIPS 2022 Workshops: MetaLearn},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/watanabe2022neuripsw-multiobjective/}
}