Learning Light-Weight Translation Models from Deep Transformer

Abstract

Recently, deep models have shown tremendous improvements in neural machine translation (NMT). However, systems of this kind are computationally expensive and memory intensive. In this paper, we take a natural step towards learning strong but light-weight NMT systems. We propose a novel group-permutation based knowledge distillation approach to compressing the deep Transformer model into a shallow model. The experimental results on several benchmarks validate the effectiveness of our method. Our compressed model is 8 times shallower than the deep model, with almost no loss in BLEU. To further enhance the teacher model, we present a Skipping Sub-Layer method that randomly omits sub-layers to introduce perturbation into training, which achieves a BLEU score of 30.63 on English-German newstest2014. The code is publicly available at https://github.com/libeineu/GPKD.
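The abstract only sketches the Skipping Sub-Layer regularizer, so the snippet below is a minimal, hypothetical PyTorch sketch of the general idea: during training, each residual sub-layer (e.g. an FFN block) is dropped with some probability, so the network sees a perturbed, effectively shallower model. The module names, pre-norm layout, and skip probability are illustrative assumptions, not the released GPKD implementation.

```python
# Illustrative sketch of randomly skipping residual sub-layers during training
# (assumed pre-norm layout and p_skip value; not the authors' released code).
import torch
import torch.nn as nn


class SkippableSubLayer(nn.Module):
    """Wraps a sub-layer (self-attention or FFN) in a residual block
    that is randomly skipped at training time."""

    def __init__(self, d_model, sublayer, p_skip=0.2):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.sublayer = sublayer
        self.p_skip = p_skip

    def forward(self, x):
        # With probability p_skip, omit the residual branch entirely,
        # reducing this block to the identity (only during training).
        if self.training and torch.rand(()).item() < self.p_skip:
            return x
        return x + self.sublayer(self.norm(x))


class FeedForward(nn.Module):
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x):
        return self.net(x)


# Toy example: a 4-layer stack where every FFN sub-layer can be skipped.
d_model, d_ff = 512, 2048
encoder = nn.Sequential(
    *[SkippableSubLayer(d_model, FeedForward(d_model, d_ff), p_skip=0.2) for _ in range(4)]
)
encoder.train()
x = torch.randn(10, 8, d_model)  # (sequence length, batch size, model dim)
print(encoder(x).shape)  # torch.Size([10, 8, 512])
```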

Cite

Text

Li et al. "Learning Light-Weight Translation Models from Deep Transformer." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I15.17561

Markdown

[Li et al. "Learning Light-Weight Translation Models from Deep Transformer." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/li2021aaai-learning-b/) doi:10.1609/AAAI.V35I15.17561

BibTeX

@inproceedings{li2021aaai-learning-b,
  title     = {{Learning Light-Weight Translation Models from Deep Transformer}},
  author    = {Li, Bei and Wang, Ziyang and Liu, Hui and Du, Quan and Xiao, Tong and Zhang, Chunliang and Zhu, Jingbo},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {13217--13225},
  doi       = {10.1609/AAAI.V35I15.17561},
  url       = {https://mlanthology.org/aaai/2021/li2021aaai-learning-b/}
}