Deep Model Compression via Two-Stage Deep Reinforcement Learning

Abstract

Besides accuracy, the model size of convolutional neural network (CNN) models is another important factor given the limited hardware resources in practical applications. For example, employing deep neural networks on mobile systems requires the design of accurate yet fast CNNs for low latency in classification and object detection. To fulfill this need, we aim at obtaining CNN models with both high testing accuracy and small size to address resource constraints in many embedded devices. In particular, this paper proposes a generic reinforcement learning-based model compression approach with a two-stage compression pipeline: pruning and quantization. The first stage, pruning, exploits deep reinforcement learning (DRL) to co-learn the accuracy and the FLOPs updated after layer-wise channel pruning and element-wise variational pruning via information dropout. The second stage, quantization, uses a similar DRL approach but focuses on obtaining the optimal bit representation for individual layers. We conduct experiments on the CIFAR-10 and ImageNet datasets. On CIFAR-10, the proposed method reduces the size of VGGNet by 9×, from 20.04MB to 2.2MB, with a slight accuracy increase. On ImageNet, the proposed method reduces the size of VGG-16 by 33×, from 138MB to 4.14MB, with no accuracy loss.
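To make the two stages concrete, below is a minimal NumPy sketch of the two primitive operations the pipeline controls: layer-wise channel pruning and per-layer uniform quantization. This is an illustration only; in the paper, a DRL agent learns the per-layer keep ratio and bit width, and the variational (information-dropout) pruning is not shown here. The L1-norm channel ranking and symmetric quantizer are common stand-ins, not the authors' exact procedure.

```python
import numpy as np

def prune_channels(weight, keep_ratio):
    """Zero out the output channels with the smallest L1 norms.

    `weight` is a conv kernel of shape (out_ch, in_ch, kh, kw).
    The paper's DRL agent would choose `keep_ratio` per layer;
    L1 ranking here is a common heuristic, used for illustration.
    """
    norms = np.abs(weight).sum(axis=(1, 2, 3))          # one norm per output channel
    n_keep = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.argsort(norms)[-n_keep:]                  # indices of the largest norms
    mask = np.zeros(weight.shape[0], dtype=bool)
    mask[keep] = True
    pruned = weight.copy()
    pruned[~mask] = 0.0
    return pruned, mask

def quantize(weight, bits):
    """Uniform symmetric quantization to a given bit width.

    The paper's second-stage agent would choose `bits` per layer.
    Returns the dequantized weights so the error can be inspected.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(weight).max() / qmax
    q = np.round(weight / scale).clip(-qmax - 1, qmax)
    return q * scale
```

A DRL compression agent can then treat `(keep_ratio, bits)` per layer as its action and the resulting accuracy/FLOPs as its reward signal.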

Cite

Text

Zhan et al. "Deep Model Compression via Two-Stage Deep Reinforcement Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021. doi:10.1007/978-3-030-86486-6_15

Markdown

[Zhan et al. "Deep Model Compression via Two-Stage Deep Reinforcement Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021.](https://mlanthology.org/ecmlpkdd/2021/zhan2021ecmlpkdd-deep/) doi:10.1007/978-3-030-86486-6_15

BibTeX

@inproceedings{zhan2021ecmlpkdd-deep,
  title     = {{Deep Model Compression via Two-Stage Deep Reinforcement Learning}},
  author    = {Zhan, Huixin and Lin, Wei-Ming and Cao, Yongcan},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2021},
  pages     = {238--254},
  doi       = {10.1007/978-3-030-86486-6_15},
  url       = {https://mlanthology.org/ecmlpkdd/2021/zhan2021ecmlpkdd-deep/}
}