Universally Invariant Learning in Equivariant GNNs
Abstract
Equivariant Graph Neural Networks (GNNs) have demonstrated significant success across various applications. To achieve completeness---that is, the universal approximation property over the space of equivariant functions---the network must effectively capture the intricate multi-body interactions among different nodes. Prior methods attain this via deeper architectures, augmented body orders, or increased degrees of steerable features, often at high computational cost and without polynomial-time guarantees. In this work, we present a theoretically grounded framework for constructing complete equivariant GNNs that is both efficient and practical. We prove that a complete equivariant GNN can be achieved through two key components: 1) a complete scalar function, referred to as the canonical form of the geometric graph; and 2) a full-rank steerable basis set. Leveraging this finding, we propose an efficient algorithm for constructing complete equivariant GNNs based on two common models: EGNN and TFN. Empirical results show that our model achieves superior completeness and excellent performance with only a few layers, thereby significantly reducing computational overhead while maintaining strong practical efficacy.
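As a rough illustration of the two-component recipe (a minimal sketch, not the paper's algorithm), the code below pairs invariant scalars with a steerable basis: pairwise distances stand in for the complete scalar function, relative position vectors stand in for the full-rank basis, and invariant coefficients weight the basis vectors so the output rotates with the input, in the spirit of EGNN's coordinate update. The function and MLP names are hypothetical.

```python
import torch

torch.manual_seed(0)
# Hypothetical invariant scalar network; the paper's complete scalar function
# (the canonical form of the geometric graph) would replace this toy MLP.
mlp = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.SiLU(), torch.nn.Linear(16, 1))

def equivariant_layer(x):
    """x: (n, 3) coordinates -> (n, 3) rotation-equivariant update (EGNN-style)."""
    rel = x.unsqueeze(1) - x.unsqueeze(0)   # (n, n, 3): relative positions as basis vectors
    dist = rel.norm(dim=-1, keepdim=True)   # (n, n, 1): rotation-invariant scalars
    coef = mlp(dist)                        # (n, n, 1): invariant coefficients
    return (coef * rel).sum(dim=1)          # invariant weights * equivariant basis

# Sanity check: rotating the input rotates the output the same way.
x = torch.randn(5, 3)
Q, _ = torch.linalg.qr(torch.randn(3, 3))  # random orthogonal matrix
assert torch.allclose(equivariant_layer(x @ Q.T), equivariant_layer(x) @ Q.T, atol=1e-5)
```

A complete model in the paper's sense would additionally require the scalars to form a complete (canonical) description of the graph and the basis set to be full-rank; the sketch only shows why invariant coefficients applied to equivariant basis vectors preserve equivariance.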
Cite
Text
Cen et al. "Universally Invariant Learning in Equivariant GNNs." Advances in Neural Information Processing Systems, 2025.Markdown
[Cen et al. "Universally Invariant Learning in Equivariant GNNs." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/cen2025neurips-universally/)BibTeX
@inproceedings{cen2025neurips-universally,
  title     = {{Universally Invariant Learning in Equivariant GNNs}},
  author    = {Cen, Jiacheng and Li, Anyi and Lin, Ning and Xu, Tingyang and Rong, Yu and Zhao, Deli and Wang, Zihe and Huang, Wenbing},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/cen2025neurips-universally/}
}