SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs

Abstract

We propose SLoPe, a Double-Pruned **S**parse Plus **L**azy L**o**w-rank Adapter **P**r**e**training method for LLMs that improves the accuracy of sparse LLMs while accelerating their pretraining and inference and reducing their memory footprint. Sparse pretraining of LLMs reduces model accuracy; to overcome this, prior work relies on dense models during fine-tuning. SLoPe improves the accuracy of sparsely pretrained models by adding low-rank adapters in the final 1% of pretraining iterations, without adding significant overhead to pretraining or inference. In addition, SLoPe uses a double-pruned backward pass formulation that prunes the transposed weight matrix with N:M sparsity structures to enable an accelerated sparse backward pass. SLoPe accelerates the training and inference of models with billions of parameters by up to 1.25× and 1.54× respectively (OPT-33B and OPT-66B), while reducing their memory usage by up to 0.63× and 0.61× for training and inference respectively.
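To make the sparse-plus-adapter formulation concrete, the sketch below illustrates the general idea in PyTorch: a linear layer whose weight is pruned with an N:M pattern (2:4 here) and whose low-rank factors are only switched on for the final phase of pretraining. This is a minimal illustration under assumed names (`prune_n_m`, `SparsePlusLowRankLinear`, `rank`, `adapter_enabled`), not the paper's implementation; in particular, the double-pruned backward pass (pruning the transposed weight for the gradient GEMM) and the hardware sparse kernels are not shown.

```python
import torch


def prune_n_m(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the n largest-magnitude entries in every group of m along the last dim.

    Illustrative N:M pruning on a dense tensor; real kernels would use a
    hardware-friendly sparse storage format instead of a masked dense matrix.
    """
    rows, cols = weight.shape
    groups = weight.reshape(rows, cols // m, m)
    # Zero out the (m - n) smallest-magnitude entries in each group of m.
    _, drop_idx = groups.abs().topk(m - n, dim=-1, largest=False)
    mask = torch.ones_like(groups)
    mask.scatter_(-1, drop_idx, 0.0)
    return (groups * mask).reshape(rows, cols)


class SparsePlusLowRankLinear(torch.nn.Module):
    """y = x (W_pruned + L R)^T: an N:M-sparse weight plus a lazy low-rank adapter.

    Hypothetical module for illustration; the rank and initialization are assumptions.
    """

    def __init__(self, in_features: int, out_features: int, rank: int = 16):
        super().__init__()
        dense = torch.randn(out_features, in_features) * 0.02
        # Forward weight pruned with a 2:4 pattern.
        self.register_buffer("w_sparse", prune_n_m(dense))
        # Low-rank factors; L starts at zero so enabling the adapter is a no-op at first.
        self.L = torch.nn.Parameter(torch.zeros(out_features, rank))
        self.R = torch.nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.adapter_enabled = False  # flipped on for the final ~1% of iterations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x @ self.w_sparse.t()
        if self.adapter_enabled:
            y = y + (x @ self.R.t()) @ self.L.t()
        return y
```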

Cite

Text

Mozaffari et al. "SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs." International Conference on Learning Representations, 2025.

Markdown

[Mozaffari et al. "SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/mozaffari2025iclr-slope/)

BibTeX

@inproceedings{mozaffari2025iclr-slope,
  title     = {{SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs}},
  author    = {Mozaffari, Mohammad and Yazdanbakhsh, Amir and Zhang, Zhao and Dehnavi, Maryam Mehri},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/mozaffari2025iclr-slope/}
}