Improving Equivariant Networks with Probabilistic Symmetry Breaking
Abstract
Equivariance builds known symmetries into neural networks, often improving generalization. However, equivariant networks cannot break self-symmetries present in any given input. This poses a problem in two important settings: (1) prediction tasks on symmetric domains, and (2) generative models, which must break symmetries in order to reconstruct from highly symmetric latent spaces. Thus, equivariant networks are fundamentally limited when applied in these contexts. To remedy this, we present a comprehensive, probabilistic framework for symmetry breaking, based on a novel decomposition of equivariant *distributions*. Concretely, this decomposition yields a practical method for breaking symmetries in any equivariant network via randomized *canonicalization*, while retaining the inductive bias of symmetry. We experimentally show that our framework improves the performance of group-equivariant methods in modeling lattice spin systems and autoencoding graphs.
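To make the idea of randomized canonicalization concrete, below is a minimal illustrative sketch (not the paper's implementation) for the cyclic group C_n acting on 1D signals by circular shifts. The names `base_net` and `randomized_canonicalization` are hypothetical; the key point is that sampling a group element uniformly, mapping the input into a randomized "canonical" pose, applying an arbitrary network, and mapping the output back yields an output *distribution* that is equivariant, even though any single sample can break the input's self-symmetries.

```python
import numpy as np

rng = np.random.default_rng(0)

def base_net(x):
    # Stand-in for any network; it need not be equivariant itself.
    # Here: a fixed linear map, purely as a hypothetical placeholder.
    n = len(x)
    W = np.arange(n * n).reshape(n, n) / n**2
    return W @ x

def randomized_canonicalization(x, net=base_net):
    """Equivariant-in-distribution wrapper for C_n acting by np.roll.

    Samples g uniformly, applies g^{-1} to the input, runs the
    unconstrained network, then applies g to the output. For a shifted
    input h.x, substituting g' = h^{-1} g (also uniform) shows the
    output distribution transforms by h, i.e. equivariance holds in
    distribution, while individual samples can break self-symmetries.
    """
    n = len(x)
    g = rng.integers(n)           # sample g uniformly from C_n
    x_canon = np.roll(x, -g)      # act by g^{-1}
    y_canon = net(x_canon)        # arbitrary network in the sampled pose
    return np.roll(y_canon, g)    # act by g to restore equivariance

# A maximally self-symmetric input: every shift fixes it, so a
# deterministic equivariant map could only output a constant vector.
x = np.ones(8)
print(randomized_canonicalization(x))  # random samples break the tie
```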
Cite

Text

Lawrence et al. "Improving Equivariant Networks with Probabilistic Symmetry Breaking." ICML 2024 Workshops: GRaM, 2024.

Markdown

[Lawrence et al. "Improving Equivariant Networks with Probabilistic Symmetry Breaking." ICML 2024 Workshops: GRaM, 2024.](https://mlanthology.org/icmlw/2024/lawrence2024icmlw-improving/)

BibTeX
@inproceedings{lawrence2024icmlw-improving,
title = {{Improving Equivariant Networks with Probabilistic Symmetry Breaking}},
author = {Lawrence, Hannah and Portilheiro, Vasco and Zhang, Yan and Kaba, Sékou-Oumar},
booktitle = {ICML 2024 Workshops: GRaM},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/lawrence2024icmlw-improving/}
}