SKDBERT: Compressing BERT via Stochastic Knowledge Distillation
Abstract
In this paper, we propose Stochastic Knowledge Distillation (SKD) to obtain a compact BERT-style language model dubbed SKDBERT. In each distillation iteration, SKD samples a teacher model from a pre-defined teacher team, which consists of multiple teacher models with multi-level capacities, to transfer knowledge into the student model in a one-to-one manner. The sampling distribution plays an important role in SKD. We heuristically present three types of sampling distributions to assign appropriate probabilities to the multi-level teacher models. SKD has two advantages: 1) it preserves the diversity of the multi-level teacher models by stochastically sampling a single teacher model in each distillation iteration, and 2) it improves the efficacy of knowledge distillation via multi-level teacher models when a large capacity gap exists between the teacher model and the student model. Experimental results on the GLUE benchmark show that SKDBERT reduces the size of a BERT model by 40% while retaining 99.5% of its language-understanding performance and being 100% faster.
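The core loop described above — sampling one teacher per iteration from a weighted team and distilling one-to-one — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the teacher team, the sampling probabilities, and the `skd_step` helper are hypothetical stand-ins, and the loss is the standard temperature-softened KL distillation loss.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """Standard distillation loss: KL(teacher || student) at temperature T,
    scaled by T^2 as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * temperature ** 2

# Hypothetical teacher team with multi-level capacities; each teacher
# maps an input to class logits (fixed toy logits here for illustration).
teacher_team = [
    lambda x: [2.0, 1.0, 0.1],   # small-capacity teacher
    lambda x: [3.0, 1.0, 0.0],   # medium-capacity teacher
    lambda x: [4.0, 0.5, -0.5],  # large-capacity teacher
]

# One example sampling distribution over the team (the paper heuristically
# presents three types; these particular weights are an assumption).
probs = [0.2, 0.5, 0.3]

def skd_step(x, student_logits, rng=random):
    """One SKD iteration: sample a single teacher from the team according
    to the sampling distribution, then distill one-to-one."""
    teacher = rng.choices(teacher_team, weights=probs, k=1)[0]
    return kd_loss(teacher(x), student_logits)

random.seed(0)
loss = skd_step(x=None, student_logits=[1.0, 1.0, 1.0])
```

In a real training loop, `loss` would be combined with the task loss and backpropagated through the student only; the stochastic sampling means each iteration sees exactly one teacher, which is what preserves the team's diversity.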
Cite
Text
Ding et al. "SKDBERT: Compressing BERT via Stochastic Knowledge Distillation." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I6.25902
Markdown
[Ding et al. "SKDBERT: Compressing BERT via Stochastic Knowledge Distillation." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/ding2023aaai-skdbert/) doi:10.1609/AAAI.V37I6.25902
BibTeX
@inproceedings{ding2023aaai-skdbert,
title = {{SKDBERT: Compressing BERT via Stochastic Knowledge Distillation}},
author = {Ding, Zixiang and Jiang, Guoqing and Zhang, Shuai and Guo, Lin and Lin, Wei},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {7414--7422},
doi = {10.1609/AAAI.V37I6.25902},
url = {https://mlanthology.org/aaai/2023/ding2023aaai-skdbert/}
}