Towards Fully 8-Bit Integer Inference for the Transformer Model
Abstract
8-bit integer inference, a promising direction for reducing both the latency and storage of deep neural networks, has made great progress recently. However, previous systems still rely on 32-bit floating point for certain functions in complex models (e.g., Softmax in the Transformer) and make heavy use of quantization and de-quantization. In this work, we show that after a principled modification of the Transformer architecture, dubbed the Integer Transformer, an (almost) fully 8-bit integer inference algorithm, Scale Propagation, can be derived. De-quantization is adopted only when necessary, which makes the network more efficient. Our experiments on the WMT16 En<->Ro, WMT14 En<->De and En->Fr translation tasks, as well as the WikiText-103 language modelling task, show that the fully 8-bit Transformer system achieves performance comparable to the floating-point baseline while requiring a nearly 4x smaller memory footprint.
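To make the quantization and de-quantization mentioned above concrete, the sketch below shows a generic symmetric per-tensor 8-bit scheme: floats are mapped to int8 via a single scale factor and recovered approximately by multiplying back. This is only an illustrative example of the general technique, not the paper's Scale Propagation algorithm; the function names and the choice of a symmetric per-tensor scale are assumptions made for this sketch.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float tensor to int8 using one symmetric per-tensor scale."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0  # guard against all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

# Usage: round-trip a random activation tensor and inspect the quantization error.
x = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_int8(x)
x_hat = dequantize_int8(q, s)
print("max abs error:", float(np.max(np.abs(x - x_hat))))
```

In a fully integer pipeline, the goal is to keep tensors in the int8 domain across consecutive operations and track scales analytically, so that de-quantization back to float (as in `dequantize_int8` here) is needed only at the few points where integer arithmetic is insufficient.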
Cite
Text
Lin et al. "Towards Fully 8-Bit Integer Inference for the Transformer Model." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/520Markdown
[Lin et al. "Towards Fully 8-Bit Integer Inference for the Transformer Model." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/lin2020ijcai-fully/) doi:10.24963/IJCAI.2020/520BibTeX
@inproceedings{lin2020ijcai-fully,
title = {{Towards Fully 8-Bit Integer Inference for the Transformer Model}},
author = {Lin, Ye and Li, Yanyang and Liu, Tengbo and Xiao, Tong and Liu, Tongran and Zhu, Jingbo},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
pages = {3759--3765},
doi = {10.24963/IJCAI.2020/520},
url = {https://mlanthology.org/ijcai/2020/lin2020ijcai-fully/}
}