Apprentice: Using Knowledge Distillation Techniques to Improve Low-Precision Network Accuracy

Abstract

Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. These top-performing systems, however, typically involve large models with numerous parameters. Once trained, deploying such models on resource-constrained inference systems is challenging -- the models (often deep networks, wide networks, or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques for lowering both the compute requirements and the memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. We call our approach Apprentice and show state-of-the-art accuracies using ternary precision and 4-bit precision for many variants of the ResNet architecture on the ImageNet dataset. We study three schemes in which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
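The core idea is to guide a low-precision "apprentice" (student) network with a full-precision teacher through a distillation loss. The sketch below is a minimal illustration in PyTorch under that assumption: the names apprentice_loss and quantize_weights_ternary, the thresholding rule, and the loss weights are illustrative, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def quantize_weights_ternary(w, threshold=0.05):
    # Hypothetical ternary quantizer mapping weights to {-s, 0, +s};
    # the paper's actual quantization scheme may differ.
    scale = w.abs().mean()
    t = threshold * w.abs().max()
    return torch.where(w.abs() < t, torch.zeros_like(w), scale * w.sign())

def apprentice_loss(student_logits, teacher_logits, labels,
                    alpha=0.5, beta=0.5, temperature=1.0):
    # Hard-label cross-entropy on the low-precision student, plus a
    # soft-target term that matches the full-precision teacher's outputs.
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hard + beta * soft

In use, the student's weights would be passed through the quantizer in its forward pass while the teacher runs at full precision; the combined loss is then backpropagated through the student only.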

Cite

Text

Mishra and Marr. "Apprentice: Using Knowledge Distillation Techniques to Improve Low-Precision Network Accuracy." International Conference on Learning Representations, 2018.

Markdown

[Mishra and Marr. "Apprentice: Using Knowledge Distillation Techniques to Improve Low-Precision Network Accuracy." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/mishra2018iclr-apprentice/)

BibTeX

@inproceedings{mishra2018iclr-apprentice,
  title     = {{Apprentice: Using Knowledge Distillation Techniques to Improve Low-Precision Network Accuracy}},
  author    = {Mishra, Asit and Marr, Debbie},
  booktitle = {International Conference on Learning Representations},
  year      = {2018},
  url       = {https://mlanthology.org/iclr/2018/mishra2018iclr-apprentice/}
}