Projective Pruning for Decoupling Weights
Abstract
This paper proposes Projective Pruning, a structured sparsification technique for deep neural networks that removes highly correlated weights, since they contribute little to the spanned parameter subspace. Because deep neural networks are often inefficient due to excessive overparametrization and highly correlated weights, the method enables parameter compression while maintaining high model performance. A redistribution mechanism preserves the model's performance and expressiveness after pruning. Evaluations on multiple vision and language benchmarks, including large language model architectures, show that, unlike many other pruning methods, Projective Pruning delivers reliable compression with stable model performance. The method also improves retrainability and achieves competitive results compared to existing structured pruning approaches.
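The core idea described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction based only on the abstract, not the authors' algorithm: the function name `projective_prune`, the cosine-similarity pruning criterion, and the least-squares redistribution step are all assumptions.

```python
import numpy as np

def projective_prune(W, n_prune=1):
    """Illustrative sketch: prune the rows of a weight matrix W that are
    most correlated with the remaining rows, and compute projection
    coefficients that express each pruned row as a linear combination of
    the kept rows (a possible form of weight redistribution)."""
    W = np.asarray(W, dtype=float)
    keep = list(range(W.shape[0]))        # indices of rows still kept
    coeffs = {}                           # pruned row index -> coefficients
    for _ in range(n_prune):
        # Cosine similarity between all pairs of kept rows.
        U = W[keep] / np.linalg.norm(W[keep], axis=1, keepdims=True)
        C = np.abs(U @ U.T)
        np.fill_diagonal(C, 0.0)
        # The row with the highest correlation to another row is the
        # most redundant candidate for removal.
        i = int(np.argmax(C.max(axis=1)))
        pruned = keep.pop(i)
        # Redistribute: least-squares projection of the pruned row onto
        # the span of the kept rows.
        a, *_ = np.linalg.lstsq(W[keep].T, W[pruned], rcond=None)
        coeffs[pruned] = a
    return keep, coeffs
```

Under this sketch, a downstream layer could absorb `coeffs` to compensate for the removed row, which is one way the abstract's "redistribution mechanism" could preserve expressiveness.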
Cite
Text
Chu and Kovalenko. "Projective Pruning for Decoupling Weights." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025. doi:10.1007/978-3-032-06106-5_19
Markdown
[Chu and Kovalenko. "Projective Pruning for Decoupling Weights." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025.](https://mlanthology.org/ecmlpkdd/2025/chu2025ecmlpkdd-projective/) doi:10.1007/978-3-032-06106-5_19
BibTeX
@inproceedings{chu2025ecmlpkdd-projective,
title = {{Projective Pruning for Decoupling Weights}},
author = {Chu, Tommy and Kovalenko, Alexander},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2025},
pages = {322--339},
doi = {10.1007/978-3-032-06106-5_19},
url = {https://mlanthology.org/ecmlpkdd/2025/chu2025ecmlpkdd-projective/}
}