E(n) Equivariant Graph Neural Networks
Abstract
This paper introduces a new model for learning graph neural networks equivariant to rotations, translations, reflections, and permutations, called E(n)-Equivariant Graph Neural Networks (EGNNs). In contrast with existing methods, our approach does not require computationally expensive higher-order representations in intermediate layers while still achieving competitive or better performance. In addition, whereas existing methods are limited to equivariance on 3-dimensional spaces, our model easily scales to higher-dimensional spaces. We demonstrate the effectiveness of our method on dynamical-systems modelling, representation learning in graph autoencoders, and predicting molecular properties.
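The key idea described in the abstract can be illustrated with a toy layer: invariant node features are updated from messages built on relative squared distances, and coordinates move only along relative difference vectors scaled by an invariant factor, which makes the update equivariant to rotations, reflections, and translations in any dimension. The sketch below is a hypothetical simplification (the scalar functions stand in for the paper's learned MLPs), not the authors' implementation.

```python
import numpy as np

def egnn_layer(h, x, edges):
    """One simplified E(n)-equivariant layer (illustrative sketch).

    h: (n, d) invariant node features
    x: (n, k) node coordinates in k-dimensional space
    edges: iterable of directed (i, j) index pairs

    Toy stand-ins for the paper's learned networks:
    phi_e -> tanh, phi_x -> mean of the message, phi_h -> residual sum.
    """
    m = np.zeros_like(h)   # aggregated invariant messages per node
    dx = np.zeros_like(x)  # equivariant coordinate update per node
    for i, j in edges:
        dist2 = np.sum((x[i] - x[j]) ** 2)       # invariant scalar
        m_ij = np.tanh(h[i] + h[j] + dist2)      # invariant message
        m[i] += m_ij
        # Coordinates move along the relative vector x_i - x_j,
        # scaled by an invariant scalar -> E(n)-equivariant update.
        dx[i] += (x[i] - x[j]) * np.mean(m_ij)
    return h + m, x + dx

# Check equivariance under a random orthogonal map (rotation/reflection):
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 2))
x = rng.normal(size=(4, 3))
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix

h1, x1 = egnn_layer(h, x, edges)          # transform after the layer
h2, x2 = egnn_layer(h, x @ Q.T, edges)    # transform before the layer
print(np.allclose(h1, h2), np.allclose(x1 @ Q.T, x2))
```

Because only squared distances and relative vectors enter the update, the same code works unchanged for any coordinate dimension k, mirroring the abstract's claim that the model is not tied to 3-dimensional space.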
Cite
Text
Satorras et al. "E(n) Equivariant Graph Neural Networks." International Conference on Machine Learning, 2021.
Markdown
[Satorras et al. "E(n) Equivariant Graph Neural Networks." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/satorras2021icml-equivariant/)
BibTeX
@inproceedings{satorras2021icml-equivariant,
title = {{E(n) Equivariant Graph Neural Networks}},
author = {Satorras, Víctor Garcia and Hoogeboom, Emiel and Welling, Max},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {9323--9332},
volume = {139},
url = {https://mlanthology.org/icml/2021/satorras2021icml-equivariant/}
}