Attention Is All You Need but You Don’t Need All of It for Inference of Large Language Models
Abstract
The inference demand for LLMs has skyrocketed in recent months, and serving models with low latency remains challenging due to the quadratic input-length complexity of the attention layers. In this work, we investigate the effect of dropping MLP and attention layers at inference time on the performance of Llama-v2 models. We find that dropping deeper attention layers only marginally decreases performance but, alongside dropping entire layers, yields the best speedups. For example, removing 33% of attention layers in a 13B Llama2 model results in a 0.9% drop in average performance on the OpenLLM benchmark. We also observe that, except for the later layers, performance degrades as more layers are skipped, unless only attention layers are skipped.
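As a rough illustration of the idea (not the authors' released code), the sketch below shows one way to skip the attention sub-layer in the deepest blocks of a toy pre-norm transformer at inference time; all module names, sizes, and the `drop_attn_frac` parameter are hypothetical choices for this example.

```python
import torch
import torch.nn as nn


class Block(nn.Module):
    """Pre-norm transformer block whose attention sub-layer can be skipped."""

    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x, skip_attn=False):
        if not skip_attn:  # bypass the quadratic-cost sub-layer when requested
            h = self.ln1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.ln2(x))
        return x


class TinyTransformer(nn.Module):
    """Stack of blocks where the deepest fraction skips attention at inference."""

    def __init__(self, n_layers=12, d_model=64, drop_attn_frac=0.33):
        super().__init__()
        self.blocks = nn.ModuleList(Block(d_model) for _ in range(n_layers))
        # Skip attention only in the last `drop_attn_frac` of blocks (deeper layers).
        first_skipped = int(n_layers * (1 - drop_attn_frac))
        self.skip_attn = [i >= first_skipped for i in range(n_layers)]

    def forward(self, x):
        for block, skip in zip(self.blocks, self.skip_attn):
            x = block(x, skip_attn=skip)
        return x


if __name__ == "__main__":
    model = TinyTransformer().eval()
    with torch.no_grad():
        out = model(torch.randn(1, 16, 64))
    print(out.shape)  # torch.Size([1, 16, 64])
```

With 12 blocks and `drop_attn_frac=0.33`, the last 4 blocks run only their MLP sub-layer, mirroring the "drop deeper attention layers" setting described in the abstract at toy scale.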
Cite
Text
Tyukin et al. "Attention Is All You Need but You Don’t Need All of It for Inference of Large Language Models." ICML 2024 Workshops: TF2M, 2024.
Markdown
[Tyukin et al. "Attention Is All You Need but You Don’t Need All of It for Inference of Large Language Models." ICML 2024 Workshops: TF2M, 2024.](https://mlanthology.org/icmlw/2024/tyukin2024icmlw-attention/)
BibTeX
@inproceedings{tyukin2024icmlw-attention,
title = {{Attention Is All You Need but You Don’t Need All of It for Inference of Large Language Models}},
author = {Tyukin, Georgy and Dovonon, Gbetondji Jean-Sebastien and Kaddour, Jean and Minervini, Pasquale},
booktitle = {ICML 2024 Workshops: TF2M},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/tyukin2024icmlw-attention/}
}