Any-Dimensional Equivariant Neural Networks

Abstract

Traditional supervised learning aims to learn an unknown mapping by fitting a function to a set of input-output pairs of a fixed dimension. The fitted function is then defined only on inputs of that same dimension. However, in many settings, the unknown mapping takes inputs in any dimension; examples include graph parameters defined on graphs of any size and physics quantities defined on an arbitrary number of particles. We leverage a newly-discovered phenomenon in algebraic topology, called representation stability, to define equivariant neural networks that can be trained with data in a fixed dimension and then extended to accept inputs in any dimension. Our approach is black-box and user-friendly, requiring only the network architecture and the groups for equivariance, and can be combined with any training procedure. We provide a simple open-source implementation of our methods and present preliminary numerical experiments.
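To make the "train in a fixed dimension, apply in any dimension" idea concrete, here is a minimal sketch of a permutation-equivariant linear layer in the DeepSets style. This is an illustrative example, not the paper's construction: the class name and the specific parameterization f(x) = a·x + b·mean(x)·1 are our own choices. The key point it demonstrates is that the layer has a dimension-independent parameter count (two scalars), so weights fitted on inputs of one size can be evaluated on inputs of any other size.

```python
import numpy as np


class PermEquivariantLinear:
    """Permutation-equivariant linear map f(x) = a*x + b*mean(x)*1.

    Because the parameters (a, b) do not depend on the input
    dimension n, the same trained layer can be applied to vectors
    of any length. Equivariance holds since the mean is invariant
    under permutations: f(Px) = P f(x) for any permutation P.
    """

    def __init__(self, a: float, b: float):
        self.a = a
        self.b = b

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        return self.a * x + self.b * x.mean() * np.ones_like(x)


layer = PermEquivariantLinear(a=2.0, b=-1.0)

# Apply the same parameters to inputs of different dimensions.
y3 = layer([1.0, 2.0, 3.0])   # 3-dimensional input
y5 = layer(np.arange(5.0))    # 5-dimensional input, same weights
```

Permutation equivariance can be checked directly: permuting the input and then applying the layer gives the same result as applying the layer and then permuting the output.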

Cite

Text

Levin and Diaz. "Any-Dimensional Equivariant Neural Networks." Artificial Intelligence and Statistics, 2024.

Markdown

[Levin and Diaz. "Any-Dimensional Equivariant Neural Networks." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/levin2024aistats-anydimensional/)

BibTeX

@inproceedings{levin2024aistats-anydimensional,
  title     = {{Any-Dimensional Equivariant Neural Networks}},
  author    = {Levin, Eitan and Diaz, Mateo},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2024},
  pages     = {2773--2781},
  volume    = {238},
  url       = {https://mlanthology.org/aistats/2024/levin2024aistats-anydimensional/}
}