Block Transformer: Global-to-Local Language Modeling for Fast Inference
Abstract
We introduce the Block Transformer, which applies hierarchical global-to-local modeling to autoregressive transformers to mitigate the inference bottlenecks associated with self-attention. Self-attention requires the key-value (KV) cache of the entire previous sequence to be fetched from memory at every decoding step to retrieve context information, leading to two primary bottlenecks during batch inference. First, there is a significant delay in obtaining the first token, as the information of the entire prompt must first be processed to prefill the KV cache. Second, computation of subsequent tokens is bottlenecked by the high memory I/O demand of fetching the entire KV cache, which grows linearly with sequence length, incurring quadratic memory reads overall. We design the Block Transformer to strategically mitigate these costs by incorporating coarseness and locality into an integrated global-to-local architecture. At the lower layers, we aggregate tokens into fixed-size blocks and apply attention across the entire sequence at coarse granularity, capturing the global context while minimizing KV cache overhead. At the upper layers, we apply attention within each block to decode individual tokens, modeling fine-grained details with a lightweight local KV cache. We pretrain vanilla and Block Transformers from scratch and demonstrate that Block Transformers achieve 10--20x higher inference throughput than vanilla transformers with equivalent perplexity and zero-shot task performance.
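The global-to-local structure described above can be illustrated with a minimal sketch. All names here are hypothetical, and several simplifications are assumed: mean-pooling stands in for the paper's block embedder, attention is single-head with no learned projections, and the global context for a block is simply added to its token embeddings rather than passed through the paper's conditioning mechanism.

```python
import numpy as np

def causal_attention(x):
    # Single-head scaled dot-product attention with a causal mask,
    # using x directly as queries, keys, and values (no projections).
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores[np.triu(np.ones_like(scores, dtype=bool), k=1)] = -np.inf
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

def block_transformer_sketch(tokens, block_len):
    # tokens: (T, d) embeddings, with T divisible by block_len.
    T, d = tokens.shape
    n_blocks = T // block_len
    # 1) Embedder: pool each block of tokens into one block embedding
    #    (mean-pooling here; an assumed stand-in for the learned embedder).
    blocks = tokens.reshape(n_blocks, block_len, d).mean(axis=1)
    # 2) Global decoder: causal attention over n_blocks coarse units,
    #    so its KV cache is block_len times shorter than token-level attention.
    context = causal_attention(blocks)
    # 3) Local decoder: attention within each block only, conditioned on
    #    global context shifted by one block so that decoding block i uses
    #    context up to block i-1 (the first block gets a zero context).
    shifted = np.vstack([np.zeros((1, d)), context[:-1]])
    out = np.empty_like(tokens)
    for i in range(n_blocks):
        local = tokens[i * block_len:(i + 1) * block_len] + shifted[i]
        out[i * block_len:(i + 1) * block_len] = causal_attention(local)
    return out
```

Note how the local decoder's KV cache spans only `block_len` tokens regardless of sequence length, which is the source of the throughput gains the abstract describes.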
Cite
Text
Ho et al. "Block Transformer: Global-to-Local Language Modeling for Fast Inference." Neural Information Processing Systems, 2024. doi:10.52202/079017-1545
Markdown
[Ho et al. "Block Transformer: Global-to-Local Language Modeling for Fast Inference." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/ho2024neurips-block/) doi:10.52202/079017-1545
BibTeX
@inproceedings{ho2024neurips-block,
title = {{Block Transformer: Global-to-Local Language Modeling for Fast Inference}},
author = {Ho, Namgyu and Bae, Sangmin and Kim, Taehyeon and Jo, Hyunjik and Kim, Yireun and Schuster, Tal and Fisch, Adam and Thorne, James and Yun, Se-Young},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-1545},
url = {https://mlanthology.org/neurips/2024/ho2024neurips-block/}
}