Universal Equivariant Multilayer Perceptrons
Abstract
Group invariant and equivariant Multilayer Perceptrons (MLPs), also known as Equivariant Networks and Group Convolutional Neural Networks (G-CNNs), have achieved remarkable success in learning on a variety of data structures, such as sequences, images, sets, and graphs. This paper proves the universality of a broad class of equivariant MLPs with a single hidden layer. In particular, it is shown that having a hidden layer on which the group acts regularly is sufficient for universal equivariance (invariance). For example, some types of steerable CNNs become universal. Another corollary is the unconditional universality of equivariant MLPs for all Abelian groups. A third corollary is the universality of equivariant MLPs with a high-order hidden layer, where we give both group-agnostic and group-specific bounds on the order of the hidden layer that guarantees universal equivariance.
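To make the regular-representation idea concrete, here is a minimal sketch (not the paper's construction) of an equivariant hidden layer for the cyclic group Z_n, an Abelian example covered by the second corollary. The group acts on both input and hidden units by circular shifts, i.e., the hidden layer carries the regular representation, and equivariance is enforced by the circulant (parameter-shared) weight matrix. The function names below are illustrative only.

```python
import numpy as np

def cyclic_equivariant_layer(x, w, b):
    """Hidden layer carrying the regular representation of Z_n.

    The weight matrix is circulant (a group convolution over Z_n), so the
    layer commutes with cyclic shifts of the input. `w` is one generating
    row of the circulant matrix and `b` is a single shared bias.
    """
    n = x.shape[0]
    # Circulant parameter sharing: W[i, j] = w[(j - i) mod n].
    W = np.stack([np.roll(w, i) for i in range(n)])
    return np.maximum(W @ x + b, 0.0)  # pointwise ReLU preserves equivariance

def invariant_readout(h):
    """Summing over the regular representation gives a Z_n-invariant scalar."""
    return h.sum()

# Sanity check: shifting the input shifts the hidden layer (equivariance)
# and leaves the readout unchanged (invariance).
rng = np.random.default_rng(0)
n = 6
x, w, b = rng.normal(size=n), rng.normal(size=n), 0.1
h = cyclic_equivariant_layer(x, w, b)
h_shifted = cyclic_equivariant_layer(np.roll(x, 1), w, b)
assert np.allclose(np.roll(h, 1), h_shifted)
assert np.isclose(invariant_readout(h), invariant_readout(h_shifted))
```

Stacking such a layer with a pointwise nonlinearity and an invariant (or equivariant) readout is the single-hidden-layer architecture whose universality the paper analyzes; for non-Abelian groups the same pattern uses the group's regular representation in place of circular shifts.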
Cite
Text
Ravanbakhsh. "Universal Equivariant Multilayer Perceptrons." International Conference on Machine Learning, 2020.Markdown
[Ravanbakhsh. "Universal Equivariant Multilayer Perceptrons." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/ravanbakhsh2020icml-universal/)BibTeX
@inproceedings{ravanbakhsh2020icml-universal,
  title = {{Universal Equivariant Multilayer Perceptrons}},
  author = {Ravanbakhsh, Siamak},
  booktitle = {International Conference on Machine Learning},
  year = {2020},
  pages = {7996-8006},
  volume = {119},
  url = {https://mlanthology.org/icml/2020/ravanbakhsh2020icml-universal/}
}