AMC: AutoML for Model Compression and Acceleration on Mobile Devices

Abstract

Model compression is an effective technique for efficiently deploying neural network models on mobile devices, which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted heuristics and require domain experts to explore the large design space, trading off model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC), which leverages reinforcement learning to efficiently sample the design space and improve the model compression quality. We achieved state-of-the-art model compression results in a fully automated way without any human effort. Under 4× FLOPs reduction, we achieved 2.7% better accuracy than the hand-crafted model compression method for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet-V1 and achieved a speedup of 1.53× on the GPU (Titan Xp) and 1.95× on an Android phone (Google Pixel 1), with negligible loss of accuracy.

Cite

Text

He et al. "AMC: AutoML for Model Compression and Acceleration on Mobile Devices." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01234-2_48

Markdown

[He et al. "AMC: AutoML for Model Compression and Acceleration on Mobile Devices." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/he2018eccv-amc/) doi:10.1007/978-3-030-01234-2_48

BibTeX

@inproceedings{he2018eccv-amc,
  title     = {{AMC: AutoML for Model Compression and Acceleration on Mobile Devices}},
  author    = {He, Yihui and Lin, Ji and Liu, Zhijian and Wang, Hanrui and Li, Li-Jia and Han, Song},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  doi       = {10.1007/978-3-030-01234-2_48},
  url       = {https://mlanthology.org/eccv/2018/he2018eccv-amc/}
}