Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs

Abstract

Graph neural networks that model 3D data, such as point clouds or atoms, are typically desired to be $SO(3)$ equivariant, i.e., equivariant to 3D rotations. Unfortunately, equivariant convolutions, which are a fundamental operation for equivariant networks, increase significantly in computational complexity as higher-order tensors are used. In this paper, we address this issue by reducing the $SO(3)$ convolutions or tensor products to mathematically equivalent convolutions in $SO(2)$. This is accomplished by aligning the node embeddings' primary axis with the edge vectors, which sparsifies the tensor product and reduces the computational complexity from $O(L^6)$ to $O(L^3)$, where $L$ is the degree of the representation. We demonstrate the potential implications of this improvement by proposing the Equivariant Spherical Channel Network (eSCN), a graph neural network utilizing our novel approach to equivariant convolutions, which achieves state-of-the-art results on the large-scale OC20 and OC22 datasets.
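
To make the complexity claim concrete, here is a minimal back-of-the-envelope sketch in Python (not from the paper; the operation-counting conventions are our own assumptions). It counts multiply-accumulates in a dense $SO(3)$ tensor product, where every valid triple of degrees $(l_1, l_2, l_3)$ is contracted against the full Clebsch-Gordan tensor, versus the sparsified $SO(2)$ convolution, where, after rotating each edge onto a fixed axis, each order $m$ mixes only the pair $(m, -m)$ across degrees.

def so3_ops(L):
    # Dense contraction against the Clebsch-Gordan tensor: each triple of
    # degrees (l1, l2, l3) with |l1 - l2| <= l3 <= l1 + l2 costs
    # (2*l1+1) * (2*l2+1) * (2*l3+1) multiply-accumulates -> O(L^6) overall.
    return sum(
        (2 * l1 + 1) * (2 * l2 + 1) * (2 * l3 + 1)
        for l1 in range(L + 1)
        for l2 in range(L + 1)
        for l3 in range(abs(l1 - l2), min(l1 + l2, L) + 1)
    )

def so2_ops(L):
    # After aligning the edge with a fixed axis, the filter reduces to its
    # m = 0 components, and each order m > 0 couples only the pair (m, -m):
    # a 2x2 mix per (l_in, l_out, m), plus a scalar for m = 0 -> O(L^3) overall.
    return sum(
        4 if m > 0 else 1
        for l_in in range(L + 1)
        for l_out in range(L + 1)
        for m in range(min(l_in, l_out) + 1)
    )

for L in (2, 4, 6, 8):
    print(f"L={L}: SO(3) tensor product ~{so3_ops(L):>7} ops, SO(2) convolution ~{so2_ops(L):>5} ops")

Under this counting, the dense tensor product grows roughly as $L^6$ while the aligned $SO(2)$ form grows as $L^3$, so the gap between them itself widens as $L^3$; this is what makes higher representation degrees practical in eSCN.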

Cite

Text

Passaro and Zitnick. "Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs." International Conference on Machine Learning, 2023.

Markdown

[Passaro and Zitnick. "Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/passaro2023icml-reducing/)

BibTeX

@inproceedings{passaro2023icml-reducing,
  title     = {{Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs}},
  author    = {Passaro, Saro and Zitnick, C. Lawrence},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {27420--27438},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/passaro2023icml-reducing/}
}