Equivariance with Learned Canonicalization Functions
Abstract
Symmetry-based neural networks often constrain the architecture in order to achieve invariance or equivariance to a group of transformations. In this paper, we propose an alternative that avoids this architectural constraint by learning to produce a canonical representation of the data. These canonicalization functions can readily be plugged into non-equivariant backbone architectures. We offer explicit ways to implement them for many groups of interest. We show that this approach enjoys universality while providing interpretable insights. Our main hypothesis is that learning a neural network to perform canonicalization is better than relying on predefined heuristics. Our results show that learning the canonicalization function indeed leads to better results and that the approach achieves strong performance in practice.
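To make the idea concrete, here is a minimal PyTorch sketch (not the authors' code; all module names and sizes are illustrative assumptions) of invariance via a learned canonicalization function for 2D rotations acting on point clouds. The canonicalization network predicts a rotation angle equivariantly, the predicted rotation is undone, and the canonicalized input is passed to an arbitrary non-equivariant backbone.

```python
import torch
import torch.nn as nn

def rotate(x, theta):
    """Rotate a batch of 2D point clouds x: [B, N, 2] by angles theta: [B]."""
    c, s = torch.cos(theta), torch.sin(theta)
    R = torch.stack([torch.stack([c, -s], -1), torch.stack([s, c], -1)], -2)  # [B, 2, 2]
    return x @ R.transpose(-1, -2)

class CanonicalizationNet(nn.Module):
    """Predicts a canonical rotation angle equivariantly: rotating the input
    by an angle alpha shifts the predicted angle by the same alpha."""
    def __init__(self, hidden=64):
        super().__init__()
        # Per-point weights depend only on rotation-invariant features (the norms).
        self.weight_mlp = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):                                # x: [B, N, 2]
        norms = x.norm(dim=-1, keepdim=True)             # [B, N, 1], rotation-invariant
        w = self.weight_mlp(norms)                       # [B, N, 1]
        v = (w * x).sum(dim=1)                           # [B, 2], rotates with the input
        return torch.atan2(v[:, 1], v[:, 0])             # [B], equivariant angle

class CanonicalizedModel(nn.Module):
    """Invariant prediction: f(x) = backbone(rotate(x, -theta(x)))."""
    def __init__(self, backbone):
        super().__init__()
        self.canon = CanonicalizationNet()
        self.backbone = backbone                          # any non-equivariant network

    def forward(self, x):
        theta = self.canon(x)
        x_canonical = rotate(x, -theta)                   # map input to its canonical pose
        return self.backbone(x_canonical)

# Usage sketch: an ordinary MLP backbone becomes rotation-invariant end to end.
backbone = nn.Sequential(nn.Flatten(1), nn.Linear(32 * 2, 10))   # assumes N = 32 points
model = CanonicalizedModel(backbone)
x = torch.randn(4, 32, 2)
alpha = torch.rand(4) * 6.28
print(torch.allclose(model(x), model(rotate(x, alpha)), atol=1e-5))  # True up to numerics
```

For an equivariant (rather than invariant) output, the same recipe applies with the predicted rotation re-applied to the backbone's output, i.e. f(x) = rotate(backbone(rotate(x, -theta)), theta).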
Cite
Text
Kaba et al. "Equivariance with Learned Canonicalization Functions." NeurIPS 2022 Workshops: NeurReps, 2022.
Markdown
[Kaba et al. "Equivariance with Learned Canonicalization Functions." NeurIPS 2022 Workshops: NeurReps, 2022.](https://mlanthology.org/neuripsw/2022/kaba2022neuripsw-equivariance/)
BibTeX
@inproceedings{kaba2022neuripsw-equivariance,
  title = {{Equivariance with Learned Canonicalization Functions}},
  author = {Kaba, Sékou-Oumar and Mondal, Arnab Kumar and Zhang, Yan and Bengio, Yoshua and Ravanbakhsh, Siamak},
  booktitle = {NeurIPS 2022 Workshops: NeurReps},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/kaba2022neuripsw-equivariance/}
}