The Surprising Effectiveness of Randomness in LLM Pruning
Abstract
This paper investigates the structured pruning of large language models (LLMs). We find that random pruning, despite its simplicity, is a surprisingly effective baseline, particularly at lower pruning ratios. We further propose a simple and efficient method that combines randomness with existing pruning heuristics. Specifically, our method combines random neuron clustering with activation magnitude pruning, exhibiting performance comparable to gradient-based methods while being significantly more efficient (up to 50x faster). Our code is available at https://github.com/Tim-Siu/llm-random-prune.
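The abstract only sketches the method, so the snippet below is a minimal, hypothetical illustration of the general idea of combining random neuron clustering with activation-magnitude pruning: neurons of a layer are randomly partitioned into clusters, and the lowest activation-magnitude neurons within each cluster are dropped. The function and argument names (`prune_layer_random_cluster`, `activation_norms`, `num_clusters`) are assumptions for illustration; the released repository is the authoritative implementation.

```python
# Minimal sketch (not the authors' implementation): randomly cluster the output
# neurons of one layer, then prune the lowest activation-magnitude neurons
# inside each cluster.
import torch


def prune_layer_random_cluster(weight: torch.Tensor,
                               activation_norms: torch.Tensor,
                               num_clusters: int,
                               prune_ratio: float,
                               seed: int = 0) -> torch.Tensor:
    """Return a boolean mask over output neurons (True = keep).

    weight:           (out_features, in_features) layer weight
    activation_norms: (out_features,) mean |activation| per neuron,
                      collected on a small calibration set
    """
    out_features = weight.shape[0]
    gen = torch.Generator().manual_seed(seed)

    # Randomly assign each output neuron to one of `num_clusters` clusters.
    cluster_ids = torch.randint(0, num_clusters, (out_features,), generator=gen)

    keep = torch.ones(out_features, dtype=torch.bool)
    for c in range(num_clusters):
        idx = (cluster_ids == c).nonzero(as_tuple=True)[0]
        n_prune = int(prune_ratio * idx.numel())
        if n_prune == 0:
            continue
        # Within the cluster, drop the neurons with the smallest
        # average activation magnitude.
        order = torch.argsort(activation_norms[idx])
        keep[idx[order[:n_prune]]] = False
    return keep
```

Pruning within random clusters rather than globally needs only a forward pass over a calibration set to collect activation statistics, which is why such a scheme can be far cheaper than gradient-based pruning.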
Cite
Text
Xu et al. "The Surprising Effectiveness of Randomness in LLM Pruning." ICLR 2025 Workshops: SLLM, 2025.
Markdown
[Xu et al. "The Surprising Effectiveness of Randomness in LLM Pruning." ICLR 2025 Workshops: SLLM, 2025.](https://mlanthology.org/iclrw/2025/xu2025iclrw-surprising/)
BibTeX
@inproceedings{xu2025iclrw-surprising,
  title = {{The Surprising Effectiveness of Randomness in LLM Pruning}},
  author = {Xu, Shuyao and Liu, Jiayao and He, Zhenfeng and Peng, Cheng and Xu, Weidi},
  booktitle = {ICLR 2025 Workshops: SLLM},
  year = {2025},
  url = {https://mlanthology.org/iclrw/2025/xu2025iclrw-surprising/}
}