Pruning for GNNs: Lower Complexity with Comparable Expressiveness

Abstract

In recent years, the pursuit of higher expressive power in graph neural networks (GNNs) has often led to more complex aggregation mechanisms and deeper architectures. To counter this trend, we identify redundant structures in GNNs and, by pruning them, propose Pruned MP-GNNs, Pruned K-Path GNNs, and Pruned K-Hop GNNs based on their original architectures. We show that 1) although some structures are pruned, Pruned MP-GNNs and Pruned K-Path GNNs retain the expressive power of their unpruned counterparts; 2) K-Hop GNNs and their pruned architecture exhibit equivalent expressiveness on regular and strongly regular graphs; and 3) Pruned K-Path GNNs and Pruned K-Hop GNNs have lower complexity than MP-GNNs, yet higher expressive power. Experimental results validate our refinements, demonstrating competitive performance across benchmark datasets with improved efficiency.
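
For context, the sketch below shows a minimal standard message-passing GNN layer in PyTorch. It is not the paper's pruned architecture; the prune_self_transform flag is only a hypothetical illustration of removing one redundant per-layer structure of the kind the abstract alludes to.

import torch
import torch.nn as nn

class MPLayer(nn.Module):
    """Minimal message-passing layer: sum-aggregate transformed neighbor features."""
    def __init__(self, dim, prune_self_transform=False):
        super().__init__()
        self.msg = nn.Linear(dim, dim)  # transform applied to neighbor features
        # The unpruned layer keeps a separate transform for the node's own state;
        # the hypothetical "pruned" variant drops this extra weight matrix.
        self.self_transform = None if prune_self_transform else nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (num_nodes, dim) node features; adj: dense (num_nodes, num_nodes) adjacency
        out = adj @ self.msg(x)  # sum aggregation over neighbors
        if self.self_transform is not None:
            out = out + self.self_transform(x)
        return torch.relu(out)

# Toy usage on a 4-node graph
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 1.],
                    [0., 1., 0., 1.],
                    [0., 1., 1., 0.]])
x = torch.randn(4, 16)
print(MPLayer(16, prune_self_transform=True)(x, adj).shape)  # torch.Size([4, 16])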

Cite

Text

Ma et al. "Pruning for GNNs: Lower Complexity with Comparable Expressiveness." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Ma et al. "Pruning for GNNs: Lower Complexity with Comparable Expressiveness." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/ma2025icml-pruning/)

BibTeX

@inproceedings{ma2025icml-pruning,
  title     = {{Pruning for GNNs: Lower Complexity with Comparable Expressiveness}},
  author    = {Ma, Dun and Chen, Jianguo and Yang, Wenguo and Gao, Suixiang and Chen, Shengminjie},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {41854--41889},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/ma2025icml-pruning/}
}