A Survey on Model Compression and Acceleration for Pretrained Language Models
Abstract
Despite achieving state-of-the-art performance on many NLP tasks, the high energy cost and long inference delay of Transformer-based pretrained language models (PLMs) prevent their broader adoption, including in edge and mobile computing. Efficient NLP research aims to comprehensively consider computation, time, and carbon emissions across the entire life-cycle of NLP, including data preparation, model training, and inference. In this survey, we focus on the inference stage and review the current state of model compression and acceleration for pretrained language models, including benchmarks, metrics, and methodology.
Cite
Text
Xu and McAuley. "A Survey on Model Compression and Acceleration for Pretrained Language Models." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I9.26255

Markdown
[Xu and McAuley. "A Survey on Model Compression and Acceleration for Pretrained Language Models." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/xu2023aaai-survey/) doi:10.1609/AAAI.V37I9.26255

BibTeX
@inproceedings{xu2023aaai-survey,
title = {{A Survey on Model Compression and Acceleration for Pretrained Language Models}},
author = {Xu, Canwen and McAuley, Julian J.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {10566--10575},
doi = {10.1609/AAAI.V37I9.26255},
url = {https://mlanthology.org/aaai/2023/xu2023aaai-survey/}
}