Expressive Sign Equivariant Networks for Spectral Geometric Learning

Abstract

Recent work has shown the utility of developing machine learning models that respect the symmetries of eigenvectors. These works enforce sign invariance, since for any eigenvector $v$, the negation $-v$ is also an eigenvector with the same eigenvalue. In this work, we demonstrate that sign equivariance is useful for applications such as building orthogonally equivariant models and link prediction. To obtain these benefits, we develop novel sign equivariant neural network architectures. These models are based on our analytic characterization of the sign equivariant polynomials and thus inherit provable expressiveness properties.
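To make the symmetry concrete, a sign equivariant map $f$ satisfies $f(-v) = -f(v)$ for each eigenvector input. The sketch below is a minimal illustration of this property, not the architecture from the paper: it builds an odd function by multiplying $v$ elementwise with a network that depends only on $v \odot v$ (and hence is unchanged when $v$ flips sign). All weight names here are hypothetical.

```python
import numpy as np

def sign_equivariant(v, W1, W2):
    # h depends only on the elementwise squares of v,
    # so h(-v) == h(v) (sign invariant hidden features).
    h = np.tanh((v * v) @ W1) @ W2
    # Multiplying by v itself makes the output flip sign with v:
    # f(-v) = (-v) * h(v) = -f(v).
    return v * h
```

One can verify equivariance directly by checking that negating the input negates the output for random weights.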

Cite

Text

Lim et al. "Expressive Sign Equivariant Networks for Spectral Geometric Learning." ICLR 2023 Workshops: Physics4ML, 2023.

Markdown

[Lim et al. "Expressive Sign Equivariant Networks for Spectral Geometric Learning." ICLR 2023 Workshops: Physics4ML, 2023.](https://mlanthology.org/iclrw/2023/lim2023iclrw-expressive/)

BibTeX

@inproceedings{lim2023iclrw-expressive,
  title     = {{Expressive Sign Equivariant Networks for Spectral Geometric Learning}},
  author    = {Lim, Derek and Robinson, Joshua and Jegelka, Stefanie and Lipman, Yaron and Maron, Haggai},
  booktitle = {ICLR 2023 Workshops: Physics4ML},
  year      = {2023},
  url       = {https://mlanthology.org/iclrw/2023/lim2023iclrw-expressive/}
}