PoWER-BERT: Accelerating BERT Inference via Progressive Word-Vector Elimination
Abstract
We develop a novel method, called PoWER-BERT, for improving the inference time of the popular BERT model while maintaining the accuracy. It works by: a) exploiting redundancy pertaining to word-vectors (intermediate transformer block outputs) and eliminating the redundant vectors; b) determining which word-vectors to eliminate by developing a strategy for measuring their significance, based on the self-attention mechanism; c) learning how many word-vectors to eliminate by augmenting the BERT model and the loss function. Experiments on the standard GLUE benchmark show that PoWER-BERT achieves up to 4.5x reduction in inference time over BERT with < 1% loss in accuracy. We show that PoWER-BERT offers a significantly better trade-off between accuracy and inference time compared to prior methods. We demonstrate that our method attains up to 6.8x reduction in inference time with < 1% loss in accuracy when applied over ALBERT, a highly compressed version of BERT. The code for PoWER-BERT is publicly available at https://github.com/IBM/PoWER-BERT.
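The sketch below illustrates the core idea the abstract describes: scoring word-vectors by the attention they receive and keeping only the most significant ones for the next transformer block. It is a minimal illustration, not the authors' released implementation; the function names (`significance_scores`, `select_word_vectors`), tensor shapes, and the exact aggregation over heads are assumptions made for clarity.

```python
import numpy as np

def significance_scores(attention_probs: np.ndarray) -> np.ndarray:
    """attention_probs: (num_heads, seq_len, seq_len) softmax-normalized
    attention from one transformer block. A word-vector's significance is
    taken here as the total attention it receives, summed over heads and
    query positions (an assumed instantiation of the abstract's idea)."""
    return attention_probs.sum(axis=(0, 1))  # shape: (seq_len,)

def select_word_vectors(hidden_states: np.ndarray,
                        attention_probs: np.ndarray,
                        k: int) -> np.ndarray:
    """hidden_states: (seq_len, hidden_dim) word-vectors output by a block.
    Keeps the k most significant word-vectors (in their original order) so
    that the next block processes a shorter sequence."""
    scores = significance_scores(attention_probs)
    keep = np.sort(np.argsort(scores)[-k:])  # top-k positions, order preserved
    return hidden_states[keep]

# Toy usage: a 12-head block over a 128-token input, keeping 80 word-vectors.
rng = np.random.default_rng(0)
probs = rng.random((12, 128, 128))
probs /= probs.sum(axis=-1, keepdims=True)  # normalize rows like a softmax
hidden = rng.standard_normal((128, 768))
print(select_word_vectors(hidden, probs, k=80).shape)  # (80, 768)
```

In the full method, how many word-vectors each block retains (the value of k per layer) is not hand-set but learned by augmenting the model and the loss function, as stated in the abstract.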
Cite
Text
Goyal et al. "PoWER-BERT: Accelerating BERT Inference via Progressive Word-Vector Elimination." International Conference on Machine Learning, 2020.

Markdown

[Goyal et al. "PoWER-BERT: Accelerating BERT Inference via Progressive Word-Vector Elimination." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/goyal2020icml-powerbert/)

BibTeX
@inproceedings{goyal2020icml-powerbert,
title = {{PoWER-BERT: Accelerating BERT Inference via Progressive Word-Vector Elimination}},
author = {Goyal, Saurabh and Choudhury, Anamitra Roy and Raje, Saurabh and Chakaravarthy, Venkatesan and Sabharwal, Yogish and Verma, Ashish},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {3690--3699},
volume = {119},
url = {https://mlanthology.org/icml/2020/goyal2020icml-powerbert/}
}