Online Ensemble Model Compression Using Knowledge Distillation

Abstract

This paper presents a novel knowledge distillation-based model compression framework consisting of a student ensemble. It enables distillation of simultaneously learnt ensemble knowledge onto each of the compressed student models. Each model learns unique representations from the data distribution due to its distinct architecture, which helps the ensemble generalize better by combining every model’s knowledge. The distilled students and the ensemble teacher are trained simultaneously without requiring any pretrained weights. Moreover, our proposed method can deliver multiple compressed students with a single training run, which is efficient and flexible for different scenarios. We provide comprehensive experiments using state-of-the-art classification models to validate our framework’s effectiveness. Notably, using our framework, a 97% compressed ResNet110 student model achieved a 10.64% relative accuracy gain over its individual baseline training on the CIFAR100 dataset. Similarly, a 95% compressed DenseNet-BC (k=12) model achieved an 8.17% relative accuracy gain.
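The abstract describes a training scheme in which several compressed students and an ensemble teacher are optimized jointly. As a rough illustration only, the following is a minimal PyTorch-style sketch of one way such an online ensemble distillation step could look, assuming a standard softened-logit KL-divergence loss toward the averaged ensemble logits; the function name, the temperature and alpha hyperparameters, and the simple logit averaging are illustrative assumptions, not the authors' exact formulation.

# Hypothetical sketch of an online ensemble distillation step
# (not the authors' code): each student is trained with a supervised
# loss plus a distillation loss toward the averaged ensemble logits.
import torch
import torch.nn.functional as F

def online_ensemble_distillation_step(students, optimizer, x, y,
                                      temperature=3.0, alpha=0.5):
    """One step; `students` is a list of nn.Module classifiers and
    `optimizer` must cover the parameters of all of them (assumption)."""
    logits = [s(x) for s in students]                   # per-student logits [B, C]
    ensemble_logits = torch.stack(logits).mean(dim=0)   # "teacher" = ensemble average

    loss = 0.0
    for z in logits:
        ce = F.cross_entropy(z, y)                      # supervised loss
        kd = F.kl_div(                                  # distill from ensemble
            F.log_softmax(z / temperature, dim=1),
            F.softmax(ensemble_logits.detach() / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        loss = loss + (1 - alpha) * ce + alpha * kd

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Detaching the ensemble logits here is one design choice among several; whether gradients should also flow through the ensemble aggregation is a detail left to the paper itself.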

Cite

Text

Walawalkar et al. "Online Ensemble Model Compression Using Knowledge Distillation." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58529-7_2

Markdown

[Walawalkar et al. "Online Ensemble Model Compression Using Knowledge Distillation." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/walawalkar2020eccv-online/) doi:10.1007/978-3-030-58529-7_2

BibTeX

@inproceedings{walawalkar2020eccv-online,
  title     = {{Online Ensemble Model Compression Using Knowledge Distillation}},
  author    = {Walawalkar, Devesh and Shen, Zhiqiang and Savvides, Marios},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58529-7_2},
  url       = {https://mlanthology.org/eccv/2020/walawalkar2020eccv-online/}
}