Graph Masked Autoencoder Enhanced Predictor for Neural Architecture Search
Abstract
Performance estimation of neural architectures is a crucial component of neural architecture search (NAS), and neural predictors are currently a mainstream estimation method. However, training a predictor from only a few architecture evaluations, as efficient NAS requires, is a challenging task. In this paper, we propose a graph masked autoencoder (GMAE)-enhanced predictor, which reduces the dependence on supervision data through self-supervised pre-training on untrained architectures. We compare our GMAE-enhanced predictor with existing predictors in different search spaces, and experimental results show that our predictor achieves high query utilization. Moreover, the GMAE-enhanced predictor, combined with different search strategies, can discover competitive architectures in different search spaces. Code and supplementary materials are available at https://github.com/kunjing96/GMAENAS.git.
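To make the pre-training idea concrete, the following is a minimal sketch of masked-graph reconstruction on a toy cell-based architecture. It is an illustration under our own assumptions, not the paper's implementation: the cell is encoded as an upper-triangular adjacency matrix with one-hot operation labels, masked nodes are zeroed out, and a single untrained GCN-style layer plus a linear decoder produce the self-supervised reconstruction loss over the masked nodes. All names (`mask_nodes`, `gcn_layer`, the encoding sizes) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cell: 5 nodes, 4 candidate operations (hypothetical encoding;
# the paper's actual graph encoding and model may differ).
NUM_NODES, NUM_OPS, HIDDEN = 5, 4, 8

# Upper-triangular adjacency (a DAG) and one-hot operation labels per node.
adj = np.triu(rng.integers(0, 2, (NUM_NODES, NUM_NODES)), k=1).astype(float)
ops = np.eye(NUM_OPS)[rng.integers(0, NUM_OPS, NUM_NODES)]

def mask_nodes(x, ratio, rng):
    """Replace a random subset of node features with a [MASK] token (zeros here)."""
    idx = rng.choice(x.shape[0], size=max(1, int(ratio * x.shape[0])), replace=False)
    x_masked = x.copy()
    x_masked[idx] = 0.0
    return x_masked, idx

def gcn_layer(a, x, w):
    """One symmetrically normalized graph-convolution step with ReLU."""
    a_hat = a + a.T + np.eye(a.shape[0])            # undirected + self-loops
    d = a_hat.sum(1)
    a_norm = a_hat / np.sqrt(np.outer(d, d))
    return np.maximum(a_norm @ x @ w, 0.0)

# Random (untrained) encoder/decoder weights, just to show shapes and the loss.
w_enc = rng.normal(0, 0.1, (NUM_OPS, HIDDEN))
w_dec = rng.normal(0, 0.1, (HIDDEN, NUM_OPS))

x_masked, masked_idx = mask_nodes(ops, ratio=0.4, rng=rng)
h = gcn_layer(adj, x_masked, w_enc)                 # encode the masked graph
logits = h @ w_dec                                  # decode operation predictions

# Cross-entropy on the masked nodes only: the self-supervised objective
# that needs no accuracy labels, only the architectures themselves.
p = np.exp(logits - logits.max(1, keepdims=True))
p /= p.sum(1, keepdims=True)
loss = -np.log(p[masked_idx, ops[masked_idx].argmax(1)] + 1e-9).mean()
print(loss > 0)
```

In a full pipeline this loss would be minimized over many unlabeled architectures before fine-tuning the encoder as a performance predictor on the few labeled ones.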
Cite
Text
Jing et al. "Graph Masked Autoencoder Enhanced Predictor for Neural Architecture Search." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/432
Markdown
[Jing et al. "Graph Masked Autoencoder Enhanced Predictor for Neural Architecture Search." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/jing2022ijcai-graph/) doi:10.24963/IJCAI.2022/432
BibTeX
@inproceedings{jing2022ijcai-graph,
  title     = {{Graph Masked Autoencoder Enhanced Predictor for Neural Architecture Search}},
  author    = {Jing, Kun and Xu, Jungang and Li, Pengfei},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {3114--3120},
  doi       = {10.24963/IJCAI.2022/432},
  url       = {https://mlanthology.org/ijcai/2022/jing2022ijcai-graph/}
}