FOX-NAS: Fast, On-Device and Explainable Neural Architecture Search

Abstract

Neural architecture search can discover neural networks with good performance, and One-Shot approaches are prevalent. One-Shot approaches typically require a supernet with weight sharing and predictors that estimate the performance of candidate architectures. However, previous methods take a long time to generate performance predictors and are thus inefficient. To this end, we propose FOX-NAS, which consists of fast and explainable predictors based on simulated annealing and multivariate regression. Our method is quantization-friendly and can be efficiently deployed to the edge. Experiments on different hardware show that FOX-NAS models outperform some other popular neural network architectures. For example, FOX-NAS matches MobileNetV2 and EfficientNet-Lite0 accuracy with 240% and 40% less latency on the edge CPU. Search code and pre-trained models are released at https://github.com/great8nctu/FOX-NAS.
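
The abstract pairs a multivariate-regression performance predictor with a simulated-annealing search. The following is a minimal sketch of how such a pairing could look, not the authors' released code: the architecture encoding, the fabricated training samples, and all hyperparameters are hypothetical placeholders chosen only to make the example runnable.

```python
# Minimal sketch (not the FOX-NAS implementation): a multivariate linear-regression
# predictor driving simulated annealing over a toy, hypothetical architecture encoding.
import math
import random
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoding: each architecture is a vector of discrete choices
# (e.g. per-block kernel-size index, expansion-ratio index, depth).
NUM_FEATURES = 8
CHOICES_PER_FEATURE = 4

def random_arch():
    return [random.randrange(CHOICES_PER_FEATURE) for _ in range(NUM_FEATURES)]

# --- Predictor: multivariate regression fit on (encoding, accuracy) samples ---
# In practice the samples would come from evaluating subnets of a trained
# supernet; here a toy dataset is fabricated purely so the sketch runs.
X = rng.integers(0, CHOICES_PER_FEATURE, size=(200, NUM_FEATURES)).astype(float)
true_w = rng.normal(size=NUM_FEATURES)
y = X @ true_w + rng.normal(scale=0.1, size=200)    # stand-in "accuracy"

Xb = np.hstack([X, np.ones((X.shape[0], 1))])       # add bias column
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)       # least-squares fit

def predict_accuracy(arch):
    x = np.append(np.asarray(arch, dtype=float), 1.0)
    return float(x @ coef)                           # linear in features, hence explainable

# --- Search: simulated annealing over the encoded space ---
def neighbor(arch):
    new = list(arch)
    i = random.randrange(NUM_FEATURES)
    new[i] = random.randrange(CHOICES_PER_FEATURE)   # mutate one choice
    return new

def simulated_annealing(steps=2000, t_start=1.0, t_end=0.01):
    cur = random_arch()
    cur_score = predict_accuracy(cur)
    best, best_score = cur, cur_score
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling
        cand = neighbor(cur)
        cand_score = predict_accuracy(cand)
        delta = cand_score - cur_score
        # Always accept improvements; accept worse candidates with
        # probability exp(delta / t) to escape local optima.
        if delta > 0 or random.random() < math.exp(delta / t):
            cur, cur_score = cand, cand_score
            if cur_score > best_score:
                best, best_score = cur, cur_score
    return best, best_score

if __name__ == "__main__":
    arch, score = simulated_annealing()
    print("best encoding:", arch, "predicted accuracy:", round(score, 3))
```

In a real pipeline the predicted accuracy would be combined with a latency predictor or constraint for the target hardware; that part is omitted here for brevity.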

Cite

Text

Liu et al. "FOX-NAS: Fast, On-Device and Explainable Neural Architecture Search." IEEE/CVF International Conference on Computer Vision Workshops, 2021. doi:10.1109/ICCVW54120.2021.00093

Markdown

[Liu et al. "FOX-NAS: Fast, On-Device and Explainable Neural Architecture Search." IEEE/CVF International Conference on Computer Vision Workshops, 2021.](https://mlanthology.org/iccvw/2021/liu2021iccvw-foxnas/) doi:10.1109/ICCVW54120.2021.00093

BibTeX

@inproceedings{liu2021iccvw-foxnas,
  title     = {{FOX-NAS: Fast, On-Device and Explainable Neural Architecture Search}},
  author    = {Liu, Chia-Hsiang and Han, Yu-Shin and Sung, Yuan-Yao and Lee, Yi and Chiang, Hung-Yueh and Wu, Kai-Chiang},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2021},
  pages     = {789--797},
  doi       = {10.1109/ICCVW54120.2021.00093},
  url       = {https://mlanthology.org/iccvw/2021/liu2021iccvw-foxnas/}
}