NAS-Bench-Suite: NAS Evaluation Is (Now) Surprisingly Easy

Abstract

The release of tabular benchmarks, such as NAS-Bench-101 and NAS-Bench-201, has significantly lowered the computational overhead for conducting scientific research in neural architecture search (NAS). Although they have been widely adopted and used to tune real-world NAS algorithms, these benchmarks are limited to small search spaces and focus solely on image classification. Recently, several new NAS benchmarks have been introduced that cover significantly larger search spaces over a wide range of tasks, including object detection, speech recognition, and natural language processing. However, substantial differences among these NAS benchmarks have so far prevented their widespread adoption, limiting researchers to using just a few benchmarks. In this work, we present an in-depth analysis of popular NAS algorithms and performance prediction methods across 25 different combinations of search spaces and datasets, finding that many conclusions drawn from a few NAS benchmarks do *not* generalize to other benchmarks. To help remedy this problem, we introduce NAS-Bench-Suite, a comprehensive and extensible collection of NAS benchmarks, accessible through a unified interface, created to facilitate reproducible, generalizable, and rapid NAS research. Our code is available at https://github.com/automl/naslib.
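To make the "unified interface" concrete, the sketch below shows how one search-space/dataset pair from the suite might be queried through NASLib. The specific module paths, class names, and signatures used here (NasBench201SearchSpace, get_dataset_api, Metric, query) are assumptions about the repository's API, not guaranteed names; consult https://github.com/automl/naslib for the actual usage.

```python
# Hypothetical sketch of querying a tabular NAS benchmark via NASLib's
# unified interface. Names and signatures are assumptions; see the repo
# at https://github.com/automl/naslib for the real API.
from naslib.search_spaces import NasBench201SearchSpace
from naslib.search_spaces.core.query_metrics import Metric
from naslib.utils import get_dataset_api

# Load the precomputed benchmark data for one search-space/dataset pair.
dataset_api = get_dataset_api(search_space="nasbench201", dataset="cifar10")

# Sample a random architecture and look up its tabulated validation
# accuracy instead of training it from scratch.
graph = NasBench201SearchSpace()
graph.sample_random_architecture(dataset_api=dataset_api)
val_acc = graph.query(Metric.VAL_ACCURACY, dataset="cifar10",
                      dataset_api=dataset_api)
print(f"validation accuracy: {val_acc:.2f}")
```

Because every benchmark in the suite is exposed behind the same query-style interface, swapping the search space or dataset amounts to changing the arguments above rather than rewriting the evaluation loop.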

Cite

Text

Mehta et al. "NAS-Bench-Suite: NAS Evaluation Is (Now) Surprisingly Easy." International Conference on Learning Representations, 2022.

Markdown

[Mehta et al. "NAS-Bench-Suite: NAS Evaluation Is (Now) Surprisingly Easy." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/mehta2022iclr-nasbenchsuite/)

BibTeX

@inproceedings{mehta2022iclr-nasbenchsuite,
  title     = {{NAS-Bench-Suite: NAS Evaluation Is (Now) Surprisingly Easy}},
  author    = {Mehta, Yash and White, Colin and Zela, Arber and Krishnakumar, Arjun and Zabergja, Guri and Moradian, Shakiba and Safari, Mahmoud and Yu, Kaicheng and Hutter, Frank},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/mehta2022iclr-nasbenchsuite/}
}