Tandem Transformers for Inference Efficient LLMs
Abstract
The autoregressive nature of conventional large language models (LLMs) inherently limits inference speed, as tokens are generated sequentially. While speculative (Leviathan et al., 2023) and parallel (Stern et al., 2018) decoding techniques attempt to mitigate this, they face limitations: either relying on less accurate smaller models for generation or failing to fully leverage the base LLM’s representations. We introduce a novel architecture, Tandem transformers, to address these issues. This architecture uniquely combines (1) a small autoregressive model and (2) a large model operating in block mode (processing multiple tokens simultaneously). The small model’s predictive accuracy is substantially enhanced by granting it attention to the large model’s richer representations. On the PaLM2 pretraining dataset, a tandem of PaLM2-Bison and PaLM2-Gecko demonstrates a 3.3% improvement in next-token prediction accuracy over a standalone PaLM2-Gecko, offering a 1.16x speedup compared to a PaLM2-Otter model with comparable downstream performance. We further incorporate the Tandem model within the speculative decoding (SPEED) framework where the large model validates tokens from the small model. This ensures that the tandem of PaLM2-Bison and PaLM2-Gecko achieves substantial speedup (around 1.14x faster than using vanilla PaLM2-Gecko in SPEED) while maintaining identical downstream task accuracy.
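To make the decoding scheme concrete, here is a minimal sketch of a tandem-style draft-and-verify loop as described above. This is not the authors' implementation: `small_model`, `large_model`, `draft_block`, `verify_block`, the block size, and the greedy accept rule are all hypothetical stand-ins, and the tandem-specific attention from the small model to the large model's representations is only indicated in comments.

```python
# Minimal sketch (assumptions, not the paper's code): a small model drafts a block
# of tokens autoregressively; the large model then processes the whole block in one
# pass (block mode) and the longest agreeing prefix is accepted, plus one corrected
# token. In the tandem architecture, the small model would additionally attend to
# cached representations from the large model while drafting.
from typing import Callable, List, Tuple

Token = int

def draft_block(small_model: Callable[[List[Token]], Token],
                prefix: List[Token], block_size: int) -> List[Token]:
    """Small model generates `block_size` tokens one at a time (autoregressive)."""
    drafted: List[Token] = []
    for _ in range(block_size):
        drafted.append(small_model(prefix + drafted))
    return drafted

def verify_block(large_model: Callable[[List[Token]], List[Token]],
                 prefix: List[Token], drafted: List[Token]) -> Tuple[List[Token], Token]:
    """Large model scores the drafted block in parallel; accept the longest prefix
    on which both models agree, then append the large model's own next token."""
    # `large_model` returns, for each position of prefix+drafted, its prediction
    # for the following token (a greedy verification rule, used here for brevity).
    preds = large_model(prefix + drafted)
    offset = len(prefix) - 1  # prediction aligned with the first drafted token
    accepted: List[Token] = []
    for i, tok in enumerate(drafted):
        if preds[offset + i] == tok:
            accepted.append(tok)
        else:
            return accepted, preds[offset + i]  # reject; take the large model's token
    return accepted, preds[offset + len(drafted)]  # all accepted; bonus token

def tandem_speed_decode(small_model, large_model, prompt: List[Token],
                        block_size: int = 4, max_new_tokens: int = 32) -> List[Token]:
    out = list(prompt)
    while len(out) - len(prompt) < max_new_tokens:
        drafted = draft_block(small_model, out, block_size)
        accepted, correction = verify_block(large_model, out, drafted)
        out.extend(accepted + [correction])
    return out[:len(prompt) + max_new_tokens]

if __name__ == "__main__":
    # Toy stand-in "models": both predict (last_token + 1) % 50, so every drafted
    # block is accepted and few large-model passes are needed.
    toy_small = lambda seq: (seq[-1] + 1) % 50
    toy_large = lambda seq: [(t + 1) % 50 for t in seq]
    print(tandem_speed_decode(toy_small, toy_large, prompt=[0, 1, 2]))
```

Under this (assumed) greedy accept rule the output matches what the large model alone would produce, which is why downstream task accuracy is preserved while the large model runs far fewer sequential steps.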
Cite
Text
Aishwarya et al. "Tandem Transformers for Inference Efficient LLMs." International Conference on Machine Learning, 2024.
Markdown
[Aishwarya et al. "Tandem Transformers for Inference Efficient LLMs." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/s2024icml-tandem/)
BibTeX
@inproceedings{s2024icml-tandem,
title = {{Tandem Transformers for Inference Efficient LLMs}},
author = {Aishwarya, P S and Nair, Pranav Ajit and Yashas Samaga, B L and Boyd, Toby James and Kumar, Sanjiv and Jain, Prateek and Netrapalli, Praneeth},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {42906--42917},
volume = {235},
url = {https://mlanthology.org/icml/2024/s2024icml-tandem/}
}