Streamlining Redundant Layers to Compress Large Language Models

Abstract

This paper introduces LLM-Streamline, a pioneering work on layer pruning for large language models (LLMs). It is based on the observation that different layers have varying impacts on the hidden states, which makes it possible to identify and prune the less important layers. LLM-Streamline comprises two parts: layer pruning, which removes consecutive layers with the lowest importance according to a target sparsity, and layer replacement, a novel module that trains a lightweight network to stand in for the pruned layers and mitigate the performance loss. Additionally, a new metric called stability is proposed to address the limitations of the widely used accuracy metric in evaluating model compression. Experiments show that LLM-Streamline outperforms both previous and concurrent state-of-the-art pruning methods in terms of performance and training efficiency. Our code is available at [this repository](https://github.com/RUCKBReasoning/LLM-Streamline).
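The abstract only summarizes the method, so the sketch below illustrates one way the two steps could look in practice. It is a minimal sketch, assuming that a block's importance is measured by how little it changes the hidden states (e.g., cosine similarity between the block's input and output on a calibration set) and that the replacement module is a small residual MLP; the names `hidden_states`, `block_importance`, and `ReplacementMLP` are illustrative and not taken from the paper's code.

```python
# Illustrative sketch of layer pruning + layer replacement (not the authors'
# exact implementation). `hidden_states[i]` is assumed to be the hidden state
# entering layer i, collected from a calibration set.
import torch
import torch.nn.functional as F


def block_importance(hidden_states, start, num_pruned):
    """Importance of layers [start, start + num_pruned): 1 - cosine similarity
    between the block's input and output hidden states (lower = more redundant)."""
    x_in = hidden_states[start]                    # (tokens, hidden_dim)
    x_out = hidden_states[start + num_pruned]      # (tokens, hidden_dim)
    cos = F.cosine_similarity(x_in, x_out, dim=-1).mean()
    return 1.0 - cos.item()


def find_block_to_prune(hidden_states, num_pruned):
    """Start index of the consecutive block with the lowest importance."""
    num_layers = len(hidden_states) - 1
    scores = [
        block_importance(hidden_states, s, num_pruned)
        for s in range(num_layers - num_pruned + 1)
    ]
    return int(torch.tensor(scores).argmin())


class ReplacementMLP(torch.nn.Module):
    """Lightweight network trained to mimic the pruned block's input->output
    mapping, standing in for the removed layers (the layer-replacement step)."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(hidden_dim, hidden_dim),
            torch.nn.SiLU(),
            torch.nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, x):
        return x + self.net(x)  # residual form, so it starts near identity
```

In this sketch, the replacement module would be trained with a simple regression loss (e.g., MSE) between its output and the pruned block's original output hidden states before the compressed model is fine-tuned or evaluated.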

Cite

Text

Chen et al. "Streamlining Redundant Layers to Compress Large Language Models." International Conference on Learning Representations, 2025.

Markdown

[Chen et al. "Streamlining Redundant Layers to Compress Large Language Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/chen2025iclr-streamlining/)

BibTeX

@inproceedings{chen2025iclr-streamlining,
  title     = {{Streamlining Redundant Layers to Compress Large Language Models}},
  author    = {Chen, Xiaodong and Hu, Yuxuan and Zhang, Jing and Wang, Yanling and Li, Cuiping and Chen, Hong},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/chen2025iclr-streamlining/}
}