SparseLLM: Towards Global Pruning of Pre-Trained Language Models

Abstract

The transformative impact of large language models (LLMs) like LLaMA and GPT on natural language processing is countered by their prohibitive computational demands. Pruning has emerged as a pivotal compression strategy, introducing sparsity to enhance both memory and computational efficiency. Yet traditional global pruning is impractical for LLMs due to scalability issues, while local pruning, despite its efficiency, leads to suboptimal solutions. Addressing these challenges, we propose SparseLLM, a novel framework that recasts the global pruning process as a set of manageable, coordinated subproblems, allowing for resource-efficient optimization while preserving global optimality. SparseLLM's approach, which conceptualizes LLMs as a chain of modular functions and leverages auxiliary variables for problem decomposition, not only enables practical application to LLMs but also demonstrates significant performance improvements, particularly in high-sparsity regimes where it surpasses current state-of-the-art methods. Our source code is publicly available at https://github.com/BaiTheBest/SparseLLM.
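For intuition, the following is a minimal sketch, not the paper's exact formulation, of the kind of auxiliary-variable decomposition the abstract describes: the network is viewed as a chain of per-layer maps, intermediate activations are replaced by auxiliary variables, and soft penalties couple neighboring layers so that each subproblem involves only one layer's variables while still reflecting the global reconstruction objective. All symbols here (weights W_l, pre-activations z_l, activations a_l, calibration input X, dense-model target y, penalty weights alpha and beta, per-layer sparsity budgets k_l) are illustrative assumptions.

  \min_{\{\mathbf{W}_\ell\},\,\{\mathbf{z}_\ell\},\,\{\mathbf{a}_\ell\}} \;
  \|\mathbf{z}_L - \mathbf{y}\|_F^2
  \;+\; \alpha \sum_{\ell=1}^{L} \|\mathbf{z}_\ell - \mathbf{W}_\ell \mathbf{a}_{\ell-1}\|_F^2
  \;+\; \beta \sum_{\ell=1}^{L-1} \|\mathbf{a}_\ell - \sigma(\mathbf{z}_\ell)\|_F^2
  \quad \text{s.t.} \quad \|\mathbf{W}_\ell\|_0 \le k_\ell, \qquad \mathbf{a}_0 = \mathbf{X}.

Under this kind of splitting, alternating minimization over {W_l}, {z_l}, and {a_l} touches only one layer's variables at a time, so each update is a small sparse-regression or least-squares subproblem, while the penalty terms keep the per-layer solutions coordinated toward the global objective.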

Cite

Text

Bai et al. "SparseLLM: Towards Global Pruning of Pre-Trained Language Models." Neural Information Processing Systems, 2024. doi:10.52202/079017-1468

Markdown

[Bai et al. "SparseLLM: Towards Global Pruning of Pre-Trained Language Models." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/bai2024neurips-sparsellm/) doi:10.52202/079017-1468

BibTeX

@inproceedings{bai2024neurips-sparsellm,
  title     = {{SparseLLM: Towards Global Pruning of Pre-Trained Language Models}},
  author    = {Bai, Guangji and Li, Yijiang and Ling, Chen and Kim, Kibaek and Zhao, Liang},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1468},
  url       = {https://mlanthology.org/neurips/2024/bai2024neurips-sparsellm/}
}