NASOA: Towards Faster Task-Oriented Online Fine-Tuning with a Zoo of Models
Abstract
Fine-tuning from pre-trained ImageNet models has been a simple, effective, and popular approach for various computer vision tasks. The common practice of fine-tuning is to adopt a default hyperparameter setting with a fixed pre-trained model, even though neither is optimized for the specific task and time constraint. Moreover, in cloud computing or GPU clusters where tasks arrive sequentially in a stream, faster online fine-tuning is a more desirable and realistic strategy for saving money, energy consumption, and CO2 emissions. In this paper, we propose a joint Neural Architecture Search and Online Adaption framework named NASOA towards faster task-oriented fine-tuning upon the request of users. Specifically, NASOA first adopts an offline NAS to identify a group of training-efficient networks to form a pre-trained model zoo. We propose a novel joint block- and macro-level search space to enable a flexible and efficient search. Then, by estimating fine-tuning performance with an adaptive model that accumulates experience from past tasks, an online schedule generator selects the most suitable model and generates a personalized training regime for each task in a one-shot fashion. The resulting model zoo is more training-efficient than SOTA NAS models, e.g. 6x faster than RegNetY-16GF and 1.7x faster than EfficientNetB3. Experiments on multiple datasets also show that NASOA achieves much better fine-tuning results, improving accuracy by around 2.1% over the best model in the RegNet series under various time constraints and tasks, while being 40x faster than the BOHB method.
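The online step described above (estimate fine-tuning performance for each zoo model, then pick the one expected to do best within the user's time budget) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm or API: `ModelSpec`, the saturating-curve estimator, and all numbers are hypothetical placeholders for the adaptive performance model NASOA learns from past tasks.

```python
from dataclasses import dataclass


@dataclass
class ModelSpec:
    """Hypothetical description of one model in the pre-trained zoo."""
    name: str
    sec_per_epoch: float  # measured training cost per epoch
    base_score: float     # offline proxy score from the NAS stage


def estimate_accuracy(model: ModelSpec, epochs: int) -> float:
    """Toy stand-in for the paper's adaptive performance estimator:
    a saturating curve in the number of epochs, scaled by the
    model's offline proxy score."""
    return model.base_score * (1.0 - 0.5 ** (epochs / 10))


def pick_model(zoo: list, time_budget_sec: float):
    """Return the (model, epochs) pair with the highest estimated
    accuracy achievable inside the time budget."""
    best = None
    for m in zoo:
        epochs = int(time_budget_sec // m.sec_per_epoch)
        if epochs < 1:
            continue  # model cannot finish even one epoch in budget
        score = estimate_accuracy(m, epochs)
        if best is None or score > best[0]:
            best = (score, m, epochs)
    if best is None:
        raise ValueError("time budget too small for every model in the zoo")
    return best[1], best[2]


zoo = [
    ModelSpec("fast-small", sec_per_epoch=30.0, base_score=0.80),
    ModelSpec("slow-large", sec_per_epoch=120.0, base_score=0.90),
]
model, epochs = pick_model(zoo, time_budget_sec=600.0)
print(model.name, epochs)  # under a tight budget, the cheaper model wins
```

Under a tight budget the cheaper model trains for more epochs and overtakes the stronger-but-slower one, which is the trade-off the online schedule generator exploits; the real estimator is learned from accumulated fine-tuning experience rather than fixed in closed form.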
Cite
Text
Xu et al. "NASOA: Towards Faster Task-Oriented Online Fine-Tuning with a Zoo of Models." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00505
Markdown
[Xu et al. "NASOA: Towards Faster Task-Oriented Online Fine-Tuning with a Zoo of Models." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/xu2021iccv-nasoa/) doi:10.1109/ICCV48922.2021.00505
BibTeX
@inproceedings{xu2021iccv-nasoa,
title = {{NASOA: Towards Faster Task-Oriented Online Fine-Tuning with a Zoo of Models}},
author = {Xu, Hang and Kang, Ning and Zhang, Gengwei and Xie, Chuanlong and Liang, Xiaodan and Li, Zhenguo},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {5097--5106},
doi = {10.1109/ICCV48922.2021.00505},
url = {https://mlanthology.org/iccv/2021/xu2021iccv-nasoa/}
}