A Proximal Operator for Inducing 2:4-Sparsity

Abstract

Recent hardware advancements in AI accelerators and GPUs make it possible to compute sparse matrix multiplications efficiently, especially when 2 out of 4 consecutive weights are set to zero. However, this so-called 2:4 sparsity usually comes at the cost of decreased model accuracy. We derive a regularizer that exploits the local correlation of features to find better sparsity masks in trained models. We minimize the regularizer jointly with a local squared loss by deriving the proximal operator, for which we show that an efficient solution exists in the 2:4-sparse case. After optimizing the mask, we introduce masked-gradient updates to further minimize the local squared loss. We illustrate our method on toy problems and apply it to pruning entire large language models with up to 70B parameters. On models of up to 13B parameters we improve over previous state-of-the-art algorithms, while on 70B models we match their performance.
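
To make the 2:4 pattern referenced in the abstract concrete, the sketch below projects a weight matrix onto the 2:4-sparse set by keeping the two largest-magnitude entries in each group of four consecutive weights. This simple magnitude-based projection is only an illustration of the sparsity pattern, not the paper's correlation-aware proximal operator; the function name and tensor shapes are illustrative assumptions.

```python
import torch

def project_2to4(weight: torch.Tensor) -> torch.Tensor:
    """Keep the two largest-magnitude entries in every group of four
    consecutive weights; zero the rest. Magnitude-based projection,
    shown only to illustrate the 2:4 sparsity pattern, not the paper's
    proximal operator."""
    out_features, in_features = weight.shape
    assert in_features % 4 == 0, "rows must split into groups of 4"
    groups = weight.reshape(out_features, in_features // 4, 4)
    # indices of the two smallest-magnitude entries per group of 4
    _, drop_idx = groups.abs().topk(2, dim=-1, largest=False)
    mask = torch.ones_like(groups, dtype=torch.bool)
    mask.scatter_(-1, drop_idx, False)  # zero out the dropped positions
    return (groups * mask).reshape(out_features, in_features)

# toy usage: every group of 4 consecutive weights has at most 2 nonzeros
w = torch.randn(8, 16)
w_sparse = project_2to4(w)
assert (w_sparse.reshape(8, -1, 4) != 0).sum(-1).max() <= 2
```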

Cite

Text

Kübler et al. "A Proximal Operator for Inducing 2:4-Sparsity." Transactions on Machine Learning Research, 2025.

Markdown

[Kübler et al. "A Proximal Operator for Inducing 2:4-Sparsity." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/kubler2025tmlr-proximal/)

BibTeX

@article{kubler2025tmlr-proximal,
  title     = {{A Proximal Operator for Inducing 2:4-Sparsity}},
  author    = {Kübler, Jonas M. and Wang, Yu-Xiang and Sabach, Shoham and Ansari, Navid and Kleindessner, Matthäus and Budhathoki, Kailash and Cevher, Volkan and Karypis, George},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/kubler2025tmlr-proximal/}
}