Accurate Sublayer Pruning for Large Language Models by Exploiting Latency and Tunability Information

Abstract

How can we accelerate large language models (LLMs) without sacrificing accuracy? The slow inference speed of LLMs prevents us from benefiting from their remarkable performance in diverse applications. This is mainly because numerous sublayers are stacked together in LLMs. Sublayer pruning compresses and expedites LLMs by removing unnecessary sublayers. However, existing sublayer pruning algorithms are limited in accuracy since they naively select sublayers to prune, overlooking the different characteristics of each sublayer. In this paper, we propose SPRINT (Sublayer Pruning with Latency and Tunability Information), an accurate sublayer pruning method for LLMs. SPRINT accurately selects a target sublayer to prune by considering 1) the amount of latency reduction after pruning and 2) the tunability of sublayers. SPRINT iteratively prunes redundant sublayers and swiftly tunes the parameters of the remaining sublayers. Experiments show that SPRINT achieves the best accuracy-speedup trade-off, exhibiting up to 23.88%p higher accuracy on zero-shot commonsense reasoning benchmarks compared to existing pruning algorithms.
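The iterative selection described in the abstract can be sketched as a greedy loop that scores each sublayer by its redundancy, discounted by how much latency its removal saves and how recoverable its contribution is via tuning. This is a minimal illustrative sketch under assumed inputs; the scoring function and the `Sublayer` fields here are stand-in assumptions, not the paper's actual criterion.

```python
# Hedged sketch of a SPRINT-style greedy sublayer pruning loop.
# All field names and the scoring formula below are illustrative
# assumptions, not the paper's exact method.
from dataclasses import dataclass


@dataclass
class Sublayer:
    name: str
    latency_ms: float  # latency saved if this sublayer is removed
    importance: float  # estimated output change when pruned (lower = more redundant)
    tunability: float  # how well remaining layers can recover via tuning (0..1)


def prune_score(s: Sublayer) -> float:
    # Lower is better: prefer sublayers whose removal is cheap in accuracy
    # (low importance, high tunability) and rich in latency savings.
    return s.importance * (1.0 - s.tunability) / s.latency_ms


def greedy_prune(sublayers: list[Sublayer], num_to_prune: int):
    remaining = list(sublayers)
    pruned = []
    for _ in range(num_to_prune):
        target = min(remaining, key=prune_score)
        remaining.remove(target)
        pruned.append(target.name)
        # In the actual method, the remaining sublayers would be
        # swiftly tuned here before the next selection step.
    return pruned, remaining
```

A usage example: given three candidate sublayers, the loop removes the one that is simultaneously redundant, recoverable, and latency-heavy first.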

Cite

Text

Park et al. "Accurate Sublayer Pruning for Large Language Models by Exploiting Latency and Tunability Information." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/913

Markdown

[Park et al. "Accurate Sublayer Pruning for Large Language Models by Exploiting Latency and Tunability Information." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/park2025ijcai-accurate/) doi:10.24963/IJCAI.2025/913

BibTeX

@inproceedings{park2025ijcai-accurate,
  title     = {{Accurate Sublayer Pruning for Large Language Models by Exploiting Latency and Tunability Information}},
  author    = {Park, Seungcheol and Lee, Sojin and Kim, Jongjin and Lee, Jinsik and Jo, Hyunjik and Kang, U},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {8213--8221},
  doi       = {10.24963/IJCAI.2025/913},
  url       = {https://mlanthology.org/ijcai/2025/park2025ijcai-accurate/}
}