Zeroth-Order Adaptive Neuron Alignment Based Pruning Without Re-Training

Abstract

Network pruning aims to reduce a given model's computational cost by removing a subset of its parameters while minimally affecting performance. Over the last decade, the dominant paradigm has been pruning followed by re-training, which has become impractical given the vast number of pre-trained models available today, most of which are too expensive to re-train. In this paper, we exploit functional information from dense pre-trained models, i.e., their input activations, to obtain sparse models whose activations are maximally aligned with those of their corresponding dense models. Hence, we propose \algname, a *top-up* algorithm that can be applied on top of any given pruning algorithm for LLMs: it modifies the block-wise and row-wise sparsity, exploiting information from both the dense model and its sparse version, to maximize the *neuron alignment* among activations. Unlike existing methods, our approach adaptively selects the best hyperparameters for the block-wise and row-wise sparsity ratios w.r.t. the model and the desired sparsity, and requires *no re-training*. We test our method over ~300 test cases with four LLM families, three sparsity ratios, and ten language tasks (three language-modeling and seven zero-shot datasets), showing that it consistently outperforms the latest state-of-the-art methods in terms of the performance-runtime trade-off.
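The core quantity in the abstract, neuron alignment between the dense model's activations and those of its sparse version, can be illustrated with a small sketch. The code below is an assumption-laden illustration (not the authors' implementation): it measures alignment as the per-neuron cosine similarity between dense and sparse input activations, averaged over neurons, which is the kind of objective a top-up method could maximize when reallocating block-wise and row-wise sparsity. All function names here are hypothetical.

```python
# Hedged sketch of a neuron-alignment score; not the paper's actual code.
import numpy as np

def neuron_alignment(dense_acts, sparse_acts, eps=1e-8):
    """Per-neuron cosine similarity between activations.

    dense_acts, sparse_acts: arrays of shape (num_tokens, num_neurons),
    collected from the dense model and its pruned counterpart on the
    same calibration inputs.
    """
    num = (dense_acts * sparse_acts).sum(axis=0)
    den = (np.linalg.norm(dense_acts, axis=0)
           * np.linalg.norm(sparse_acts, axis=0)) + eps
    return num / den

def mean_alignment(dense_acts, sparse_acts):
    # Scalar objective: higher means the sparse model's activations
    # track the dense model's more closely.
    return float(neuron_alignment(dense_acts, sparse_acts).mean())

# Toy check with random activations standing in for real ones.
rng = np.random.default_rng(0)
dense = rng.normal(size=(128, 16))
print(mean_alignment(dense, dense))                 # identical -> 1.0
print(mean_alignment(dense, np.zeros_like(dense)))  # all-zero -> 0.0
```

In practice, a search over candidate sparsity-ratio configurations would evaluate such a score for each candidate and keep the one with the highest alignment, which matches the zeroth-order (gradient-free) flavor suggested by the title.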

Cite

Text

Cunegatti et al. "Zeroth-Order Adaptive Neuron Alignment Based Pruning Without Re-Training." Transactions on Machine Learning Research, 2025.

Markdown

[Cunegatti et al. "Zeroth-Order Adaptive Neuron Alignment Based Pruning Without Re-Training." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/cunegatti2025tmlr-zerothorder/)

BibTeX

@article{cunegatti2025tmlr-zerothorder,
  title     = {{Zeroth-Order Adaptive Neuron Alignment Based Pruning Without Re-Training}},
  author    = {Cunegatti, Elia and Custode, Leonardo Lucio and Iacca, Giovanni},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/cunegatti2025tmlr-zerothorder/}
}