Not All Layers of LLMs Are Necessary During Inference
Abstract
Due to the large number of parameters, the inference phase of Large Language Models (LLMs) is resource-intensive. However, not all requests posed to LLMs are equally difficult to handle. Through analysis, we show that for some tasks, LLMs can achieve results comparable to the final output at some intermediate layers. That is, not all layers of LLMs are necessary during inference. If we can predict at which layer the inferred results match the final results (produced by evaluating all layers), we can significantly reduce the inference cost. To this end, we propose a simple yet effective algorithm named AdaInfer to adaptively terminate the inference process for an input instance. AdaInfer relies on easily obtainable statistical features and classic classifiers like SVM. Experiments on well-known LLMs such as the Llama2 series and OPT show that AdaInfer achieves an average pruning ratio of 17.8%, and up to 43% on sentiment tasks, with nearly no performance drop (<1%). Because AdaInfer does not alter LLM parameters, LLMs equipped with AdaInfer maintain their generalizability across tasks.
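The abstract describes an early-exit scheme: at each layer, cheap statistical features are fed to a classic classifier that decides whether to stop. The sketch below illustrates that control flow only; the feature choice (top-1 probability and the top-1/top-2 gap from an early-exit head), the linear stand-in for the SVM, and all constants are illustrative assumptions, not the paper's actual implementation.

```python
import math

def top_prob_features(logits):
    """Illustrative per-layer features: top-1 probability and gap to top-2."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]   # numerically stable softmax
    z = sum(exps)
    probs = sorted((e / z for e in exps), reverse=True)
    return probs[0], probs[0] - probs[1]

class LinearStopClassifier:
    """Stand-in for the SVM stopping classifier (weights are made up)."""
    def __init__(self, w=(1.0, 1.0), bias=-1.0):
        self.w, self.bias = w, bias

    def should_stop(self, feats):
        return sum(wi * fi for wi, fi in zip(self.w, feats)) + self.bias > 0

def adaptive_inference(layer_logits, clf):
    """Walk the layers in order and exit at the first layer the classifier
    accepts; layer_logits stands in for per-layer logits from an exit head."""
    for i, logits in enumerate(layer_logits):
        if clf.should_stop(top_prob_features(logits)):
            return i, logits                       # early exit: skip the rest
    return len(layer_logits) - 1, layer_logits[-1]  # fall back to last layer

# Toy run: confidence grows with depth, so the exit fires before the end.
layers = [[0.1, 0.0, 0.0], [1.0, 0.1, 0.0], [4.0, 0.2, 0.1], [6.0, 0.2, 0.1]]
exit_layer, _ = adaptive_inference(layers, LinearStopClassifier())
print(exit_layer)  # exits before the final layer
```

In the paper the classifier is trained (e.g., an SVM on the statistical features), whereas here the decision rule is fixed by hand purely to show the per-instance early-exit loop.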
Cite
Text

Fan et al. "Not All Layers of LLMs Are Necessary During Inference." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/566

Markdown

[Fan et al. "Not All Layers of LLMs Are Necessary During Inference." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/fan2025ijcai-all/) doi:10.24963/IJCAI.2025/566

BibTeX
@inproceedings{fan2025ijcai-all,
title = {{Not All Layers of LLMs Are Necessary During Inference}},
author = {Fan, Siqi and Jiang, Xin and Li, Xiang and Meng, Xuying and Han, Peng and Shang, Shuo and Sun, Aixin and Wang, Yequan},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
pages = {5083-5091},
doi = {10.24963/IJCAI.2025/566},
url = {https://mlanthology.org/ijcai/2025/fan2025ijcai-all/}
}