JAHS-Bench-201: A Foundation for Research on Joint Architecture and Hyperparameter Search
Abstract
The past few years have seen the development of many benchmarks for Neural Architecture Search (NAS), fueling rapid progress in NAS research. However, recent work, which shows that good hyperparameter settings can be more important than using the best architecture, calls for a shift in focus towards Joint Architecture and Hyperparameter Search (JAHS). Therefore, we present JAHS-Bench-201, the first collection of surrogate benchmarks for JAHS, built to also facilitate research on multi-objective, cost-aware and (multi) multi-fidelity optimization algorithms. To the best of our knowledge, JAHS-Bench-201 is based on the most extensive dataset of neural network performance data in the public domain. It is composed of approximately 161 million data points and 20 performance metrics for three deep learning tasks, while featuring a 14-dimensional search and fidelity space that extends the popular NAS-Bench-201 space. With JAHS-Bench-201, we hope to democratize research on JAHS and lower the barrier to entry of an extremely compute intensive field, e.g., by reducing the compute time to run a JAHS algorithm from 5 days to only a few seconds.
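As an illustration of the surrogate-based workflow the abstract describes, the sketch below queries a benchmark for the predicted performance of a sampled configuration instead of training it, which is how the evaluation time drops from days to seconds. It assumes the authors' publicly released jahs_bench Python package; the Benchmark class, sample_config helper, task name, and result-dictionary keys shown here are assumptions based on that package's documentation, not something stated on this page.

```python
# Minimal sketch of querying a JAHS surrogate benchmark instead of training.
# Assumes the `jahs_bench` package; exact names/signatures are assumptions.
import jahs_bench

# Load the surrogate model for one of the three tasks (e.g. CIFAR-10).
benchmark = jahs_bench.Benchmark(task="cifar10", download=True)

# Draw a random point from the 14-dimensional joint architecture +
# hyperparameter space and query its predicted metrics at 200 epochs.
config = benchmark.sample_config()
results = benchmark(config, nepochs=200)

# `results` is assumed to map epochs to predicted metrics (e.g. validation
# accuracy, runtime), letting a JAHS algorithm evaluate candidates in seconds.
print(results[200]["valid-acc"])
```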
Cite
Text
Bansal et al. "JAHS-Bench-201: A Foundation for Research on Joint Architecture and Hyperparameter Search." Neural Information Processing Systems, 2022.
Markdown
[Bansal et al. "JAHS-Bench-201: A Foundation for Research on Joint Architecture and Hyperparameter Search." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/bansal2022neurips-jahsbench201/)
BibTeX
@inproceedings{bansal2022neurips-jahsbench201,
title = {{JAHS-Bench-201: A Foundation for Research on Joint Architecture and Hyperparameter Search}},
author = {Bansal, Archit and Stoll, Danny and Janowski, Maciej and Zela, Arber and Hutter, Frank},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/bansal2022neurips-jahsbench201/}
}