PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search

Abstract

The wide application of pre-trained models is driving the trend of once-for-all training in one-shot neural architecture search (NAS). However, training within a huge sample space damages the performance of individual subnets and requires much computation to search for an optimal model. In this paper, we present PreNAS, a search-free NAS approach that accentuates target models in one-shot training. Specifically, the sample space is dramatically reduced in advance by a zero-cost selector, and weight-sharing one-shot training is performed on the preferred architectures to alleviate update conflicts. Extensive experiments have demonstrated that PreNAS consistently outperforms state-of-the-art one-shot NAS competitors for both Vision Transformer and convolutional architectures, and importantly, enables instant specialization with zero search cost. Our code is available at https://github.com/tinyvision/PreNAS.
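To make the abstract's pipeline concrete, here is a minimal Python sketch of the general idea (not the authors' implementation): score candidate architectures with a zero-cost proxy, keep only the top-scoring "preferred" subset, and restrict weight-sharing one-shot training to that reduced space. The toy search space and the placeholder `zero_cost_score` function are assumptions for illustration; PreNAS uses its own zero-cost selector and Transformer/convolutional search spaces.

```python
import random

# Illustrative sketch of preferred-subset selection before one-shot training.
# The proxy score below is a stand-in; real zero-cost metrics evaluate an
# architecture without training it.

def sample_architecture(rng):
    """Sample a Transformer-like config from a toy search space."""
    return {
        "depth": rng.choice([12, 13, 14]),
        "embed_dim": rng.choice([320, 384, 448]),
        "mlp_ratio": rng.choice([3.0, 3.5, 4.0]),
    }

def zero_cost_score(arch, rng):
    """Placeholder zero-cost proxy (random + a size bias), for illustration only."""
    return rng.random() + 0.001 * arch["embed_dim"]

def preferred_subset(num_candidates=1000, keep=20, seed=0):
    """Reduce the sample space to the top-`keep` architectures by proxy score."""
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(num_candidates)]
    ranked = sorted(candidates, key=lambda a: zero_cost_score(a, rng), reverse=True)
    return ranked[:keep]  # one-shot training would then sample only from this set

if __name__ == "__main__":
    for arch in preferred_subset(keep=5):
        print(arch)
```

Because the supernet only ever trains the preferred subset, each retained subnet receives more consistent weight updates, and the final model can be specialized instantly without a separate search phase.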

Cite

Text

Wang et al. "PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search." International Conference on Machine Learning, 2023.

Markdown

[Wang et al. "PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/wang2023icml-prenas/)

BibTeX

@inproceedings{wang2023icml-prenas,
  title     = {{PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search}},
  author    = {Wang, Haibin and Ge, Ce and Chen, Hesen and Sun, Xiuyu},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {35642--35654},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/wang2023icml-prenas/}
}