KNAS: Green Neural Architecture Search
Abstract
Many existing neural architecture search (NAS) solutions rely on downstream training for architecture evaluation, which requires enormous computation. Considering that this computation brings a large carbon footprint, this paper aims to explore a green (namely, environmentally friendly) NAS solution that evaluates architectures without training. Intuitively, gradients, induced by the architecture itself, directly determine convergence and generalization. This motivates us to propose the gradient kernel hypothesis: gradients can be used as a coarse-grained proxy of downstream training to evaluate randomly initialized networks. To support the hypothesis, we conduct a theoretical analysis and find a practical gradient kernel that correlates well with training loss and validation performance. Based on this hypothesis, we propose a new kernel-based architecture search approach, KNAS. Experiments show that KNAS achieves competitive results while being orders of magnitude faster than "train-then-test" paradigms on image classification tasks. Furthermore, its extremely low search cost enables wide applications. The searched network also outperforms the strong baseline RoBERTa-large on two text classification tasks.
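The sketch below illustrates the idea behind the hypothesis, not the authors' exact method: score a randomly initialized network by computing per-sample gradients, forming their Gram (kernel) matrix, and taking its mean as a training-free proxy, then rank candidate architectures by this score. The function name, the cross-entropy loss, and the toy candidates are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): score a randomly
# initialized network via a gradient kernel. Per-sample gradients g_i
# form a Gram matrix K[i, j] = <g_i, g_j>; its mean serves as a
# training-free proxy for trainability.
import torch
import torch.nn as nn

def gradient_kernel_score(model: nn.Module, inputs: torch.Tensor,
                          targets: torch.Tensor) -> float:
    criterion = nn.CrossEntropyLoss()  # assumed task loss
    grads = []
    for x, y in zip(inputs, targets):
        model.zero_grad()
        loss = criterion(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        # Flatten all parameter gradients into one vector per sample.
        g = torch.cat([p.grad.flatten() for p in model.parameters()
                       if p.grad is not None])
        grads.append(g)
    G = torch.stack(grads)   # (batch, num_params)
    K = G @ G.T              # gradient Gram (kernel) matrix
    return K.mean().item()   # coarse-grained proxy score

# Usage: rank candidate architectures at initialization and keep the
# top-scoring one, skipping full downstream training.
if __name__ == "__main__":
    x = torch.randn(8, 16)
    y = torch.randint(0, 4, (8,))
    candidates = [nn.Sequential(nn.Linear(16, w), nn.ReLU(),
                                nn.Linear(w, 4))
                  for w in (8, 32, 128)]  # toy search space
    scores = [gradient_kernel_score(m, x, y) for m in candidates]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    print(f"scores={scores}, best candidate index={best}")
```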
Cite
Text
Xu et al. "KNAS: Green Neural Architecture Search." International Conference on Machine Learning, 2021.
Markdown
[Xu et al. "KNAS: Green Neural Architecture Search." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/xu2021icml-knas/)
BibTeX
@inproceedings{xu2021icml-knas,
title = {{KNAS: Green Neural Architecture Search}},
author = {Xu, Jingjing and Zhao, Liang and Lin, Junyang and Gao, Rundong and Sun, Xu and Yang, Hongxia},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {11613--11625},
volume = {139},
url = {https://mlanthology.org/icml/2021/xu2021icml-knas/}
}