NAS-Bench-101: Towards Reproducible Neural Architecture Search
Abstract
Recent advances in neural architecture search (NAS) demand tremendous computational resources, which makes it difficult to reproduce experiments and imposes a barrier to entry for researchers without access to large-scale computation. We aim to ameliorate these problems by introducing NAS-Bench-101, the first public architecture dataset for NAS research. To build NAS-Bench-101, we carefully constructed a compact, yet expressive, search space, exploiting graph isomorphisms to identify 423k unique convolutional architectures. We trained and evaluated all of these architectures multiple times on CIFAR-10 and compiled the results into a large dataset of over 5 million trained models. This allows researchers to evaluate the quality of a diverse range of models in milliseconds by querying the pre-computed dataset. We demonstrate its utility by analyzing the dataset as a whole and by benchmarking a range of architecture optimization algorithms.
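The dataset is queried through the authors' open-source nasbench Python package. The sketch below shows the typical lookup flow as described in that package's documentation: a cell is specified as a directed acyclic graph (at most 7 nodes and 9 edges) plus a list of node operations, and a query returns the pre-computed metrics. The file path is an illustrative assumption; the TFRecord data file is distributed separately with the nasbench repository.

from nasbench import api

# Load the pre-computed dataset (the path here is a placeholder;
# download the actual TFRecord file from the nasbench repository).
nasbench = api.NASBench('/path/to/nasbench_only108.tfrecord')

# Specify a cell as an upper-triangular adjacency matrix over 7 nodes
# plus one operation label per node. This example spec is illustrative.
model_spec = api.ModelSpec(
    matrix=[[0, 1, 1, 1, 0, 1, 0],   # input node
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 0]],  # output node
    ops=['input', 'conv1x1-bn-relu', 'conv3x3-bn-relu',
         'conv3x3-bn-relu', 'conv3x3-bn-relu', 'maxpool3x3', 'output'])

# Querying returns stored metrics from one of the trained runs
# (e.g., validation/test accuracy and training time) without any
# actual training, which is what makes evaluation take milliseconds.
metrics = nasbench.query(model_spec)
print(metrics['validation_accuracy'], metrics['training_time'])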
Cite
Text
Ying et al. "NAS-Bench-101: Towards Reproducible Neural Architecture Search." International Conference on Machine Learning, 2019.
Markdown
[Ying et al. "NAS-Bench-101: Towards Reproducible Neural Architecture Search." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/ying2019icml-nasbench101/)
BibTeX
@inproceedings{ying2019icml-nasbench101,
title = {{NAS-Bench-101: Towards Reproducible Neural Architecture Search}},
author = {Ying, Chris and Klein, Aaron and Christiansen, Eric and Real, Esteban and Murphy, Kevin and Hutter, Frank},
booktitle = {International Conference on Machine Learning},
year = {2019},
pages = {7105--7114},
volume = {97},
url = {https://mlanthology.org/icml/2019/ying2019icml-nasbench101/}
}