TA-GATES: An Encoding Scheme for Neural Network Architectures
Abstract
Neural architecture search aims to shift the manual design of neural network (NN) architectures to algorithmic design. In this setting, the NN architecture itself can be viewed as data that needs to be modeled. Better modeling could help explore novel architectures automatically and open the black box of automated architecture design. To this end, this work proposes a new encoding scheme for neural architectures, the Training-Analogous Graph-based ArchiTecture Encoding Scheme (TA-GATES). TA-GATES encodes an NN architecture in a way that is analogous to its training. Extensive experiments demonstrate that the flexibility and discriminative power of TA-GATES lead to better modeling of NN architectures. We expect our methodology of explicitly modeling the NN training process to benefit broader automated deep learning systems. The code is available at https://github.com/walkerning/aw_nas.
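To make the "training-analogous" idea concrete, the following is a minimal toy sketch, not the paper's actual method or API: an architecture is represented as a DAG over operation embeddings, and the encoder alternates a forward information-propagation pass with a backward pass that updates the embeddings, loosely mimicking forward/backward steps of training. All names, dimensions, and update rules here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8  # embedding dimension (illustrative choice)

def encode(adj, op_emb, n_steps=3, lr=0.1):
    """Toy training-analogous encoding of an architecture DAG.

    adj:    (N, N) adjacency matrix; adj[i, j] = 1 means node i feeds node j.
    op_emb: (N, D) initial embeddings of each node's operation type.
    """
    # Hypothetical "forward" and "backward" mixing weights.
    W_f = rng.standard_normal((D, D)) / np.sqrt(D)
    W_b = rng.standard_normal((D, D)) / np.sqrt(D)
    emb = op_emb.copy()
    h = emb
    for _ in range(n_steps):
        # Forward-pass analogue: propagate information along edge direction.
        h = np.tanh((adj.T @ emb) @ W_f + emb)
        # Backward-pass analogue: propagate a signal against edge direction.
        g = np.tanh((adj @ h) @ W_b)
        # Parameter-update analogue: refine the operation embeddings.
        emb = emb + lr * g
    # Read out the encoding from the output node (here, the last node).
    return h[-1]

# Tiny 3-node chain architecture: node0 -> node1 -> node2.
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)
op_emb = rng.standard_normal((3, D))
enc = encode(adj, op_emb)
print(enc.shape)  # (8,)
```

The key design point the sketch tries to convey is that, unlike a single-pass graph encoder, the operation embeddings are iteratively refined across simulated forward/backward rounds before the final read-out.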
Cite
Text
Ning et al. "TA-GATES: An Encoding Scheme for Neural Network Architectures." Neural Information Processing Systems, 2022.

Markdown

[Ning et al. "TA-GATES: An Encoding Scheme for Neural Network Architectures." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/ning2022neurips-tagates/)

BibTeX
@inproceedings{ning2022neurips-tagates,
title = {{TA-GATES: An Encoding Scheme for Neural Network Architectures}},
author = {Ning, Xuefei and Zhou, Zixuan and Zhao, Junbo and Zhao, Tianchen and Deng, Yiping and Tang, Changcheng and Liang, Shuang and Yang, Huazhong and Wang, Yu},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/ning2022neurips-tagates/}
}