BERT Loses Patience: Fast and Robust Inference with Early Exit
Abstract
In this paper, we propose Patience-based Early Exit, a straightforward yet effective inference method that can be used as a plug-and-play technique to simultaneously improve the efficiency and robustness of a pretrained language model (PLM). To achieve this, our approach couples an internal classifier with each layer of a PLM and dynamically stops inference when the intermediate predictions of the internal classifiers do not change for a pre-defined number of steps. Our approach improves inference efficiency as it allows the model to make a prediction with fewer layers. Meanwhile, experimental results with an ALBERT model show that our method can improve the accuracy and robustness of the model by preventing it from overthinking and by exploiting multiple classifiers for prediction, yielding a better accuracy-speed trade-off compared to existing early exit methods.
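The mechanism described in the abstract can be sketched in a few lines. The following PyTorch-style snippet is an illustrative sketch only, not the authors' released implementation: the `layers` and `classifiers` arguments, the `[CLS]`-token pooling, and the `patience` default are assumptions about how the per-layer internal classifiers might be attached.

```python
import torch

def patience_based_early_exit(layers, classifiers, hidden, patience=3):
    """Sketch of patience-based early exit (hypothetical interface).

    layers      : list of per-layer transformer blocks (assumed callable)
    classifiers : list of internal classifiers, one per layer (assumed callable)
    hidden      : input hidden states of shape (batch, seq_len, dim)
    patience    : number of consecutive unchanged predictions before exiting
    """
    counter = 0        # consecutive layers whose predictions agreed
    prev_pred = None   # prediction of the previous internal classifier
    logits = None
    for layer, clf in zip(layers, classifiers):
        hidden = layer(hidden)           # run one transformer layer
        logits = clf(hidden[:, 0])       # classify from the [CLS] position
        pred = logits.argmax(dim=-1)
        if prev_pred is not None and torch.equal(pred, prev_pred):
            counter += 1                 # prediction unchanged: gain patience
        else:
            counter = 0                  # prediction changed: reset counter
        if counter >= patience:
            break                        # stop inference early
        prev_pred = pred
    return logits
```

In this sketch the exit condition fires once `patience` consecutive internal classifiers agree, so easy inputs leave after a few layers while harder ones use the full stack; with a batch, `torch.equal` only exits when every example's prediction is unchanged, so per-example exit (as in single-instance inference) is the simpler setting.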
Cite
Text
Zhou et al. "BERT Loses Patience: Fast and Robust Inference with Early Exit." Neural Information Processing Systems, 2020.
Markdown
[Zhou et al. "BERT Loses Patience: Fast and Robust Inference with Early Exit." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/zhou2020neurips-bert/)
BibTeX
@inproceedings{zhou2020neurips-bert,
title = {{BERT Loses Patience: Fast and Robust Inference with Early Exit}},
author = {Zhou, Wangchunshu and Xu, Canwen and Ge, Tao and McAuley, Julian and Xu, Ke and Wei, Furu},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/zhou2020neurips-bert/}
}