Symmetry Breaking and Equivariant Neural Networks

Abstract

Using symmetry as an inductive bias in deep learning has proven to be a principled approach for sample-efficient model design. However, the relationship between symmetry and the imperative for equivariance in neural networks is not always obvious. Here, we analyze a key limitation that arises in equivariant functions: their incapacity to break symmetry at the level of individual data samples. In response, we introduce a novel notion of 'relaxed equivariance' that circumvents this limitation. We further demonstrate how to incorporate this relaxation into equivariant multilayer perceptrons (E-MLPs), offering an alternative to the noise-injection method. The relevance of symmetry breaking is then discussed in various application domains: physics, graph representation learning, combinatorial optimization, and equivariant decoding.
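As a brief, informal sketch (not taken from the paper itself, and possibly differing from the authors' exact formulation), the limitation mentioned in the abstract follows directly from the definition of equivariance, and a relaxed condition weakens that definition just enough to permit symmetry breaking.

If $f: X \to Y$ is $G$-equivariant, i.e. $f(g \cdot x) = g \cdot f(x)$ for all $g \in G$, and $h \in G_x$ stabilizes a sample $x$ (that is, $h \cdot x = x$), then

$$ f(x) = f(h \cdot x) = h \cdot f(x), $$

so the output $f(x)$ inherits every self-symmetry of the input and can never be less symmetric than $x$. A relaxation in the spirit of the abstract requires only that for each $g \in G$ there exists some $h \in G_x$ with $f(g \cdot x) = g \cdot h \cdot f(x)$, which keeps outputs consistent under group transformations of the input while allowing them to break the input's own symmetries.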

Cite

Text

Kaba and Ravanbakhsh. "Symmetry Breaking and Equivariant Neural Networks." NeurIPS 2023 Workshops: NeurReps, 2023.

Markdown

[Kaba and Ravanbakhsh. "Symmetry Breaking and Equivariant Neural Networks." NeurIPS 2023 Workshops: NeurReps, 2023.](https://mlanthology.org/neuripsw/2023/kaba2023neuripsw-symmetry/)

BibTeX

@inproceedings{kaba2023neuripsw-symmetry,
  title     = {{Symmetry Breaking and Equivariant Neural Networks}},
  author    = {Kaba, Sékou-Oumar and Ravanbakhsh, Siamak},
  booktitle = {NeurIPS 2023 Workshops: NeurReps},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/kaba2023neuripsw-symmetry/}
}