Learning Generative Models with Invariance to Symmetries
Abstract
While imbuing a model with invariance under symmetry transformations can improve data efficiency and predictive performance, most methods require specialised architectures and, thus, prior knowledge of the symmetries. Unfortunately, we do not always know which symmetries are present in the data. Recent work has addressed this problem by jointly learning the invariance (or the degree of invariance) and the model from the data alone. However, that work has focused on discriminative models. We describe a method for learning invariant generative models, and demonstrate that it can learn a generative model of handwritten digits that is invariant to rotation.
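The sketch below is not the paper's implementation; it is a minimal illustration of the general idea of a learnable degree of invariance, assuming a hypothetical base generative model that exposes a `log_prob(x)` method. It marginalises the likelihood over rotations drawn from a learnable angular range, so that the amount of rotational invariance is fitted from data rather than fixed in advance.

```python
# Minimal sketch (assumption, not the authors' method): approximate rotational
# invariance in a generative model by averaging the likelihood over rotations
# sampled from a learnable maximum angle.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def rotate(x, angles):
    """Rotate a batch of images x (N, C, H, W) by per-sample angles in radians."""
    cos, sin = torch.cos(angles), torch.sin(angles)
    zeros = torch.zeros_like(cos)
    theta = torch.stack(
        [torch.stack([cos, -sin, zeros], dim=-1),
         torch.stack([sin, cos, zeros], dim=-1)], dim=1)  # (N, 2, 3) affine matrices
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)


class LearnedRotationInvariance(nn.Module):
    """Wraps a base generative model (assumed to expose log_prob(x)).

    The degree of invariance is a learnable maximum rotation angle; the
    marginal likelihood is estimated by Monte Carlo averaging over K rotations.
    """

    def __init__(self, base_model, num_samples=8):
        super().__init__()
        self.base_model = base_model
        self.num_samples = num_samples
        self.raw_max_angle = nn.Parameter(torch.tensor(0.0))  # unconstrained

    def max_angle(self):
        # Squash to [0, pi]: 0 means no invariance, pi means full rotation invariance.
        return math.pi * torch.sigmoid(self.raw_max_angle)

    def log_prob(self, x):
        n = x.shape[0]
        log_probs = []
        for _ in range(self.num_samples):
            # Sample angles uniformly in [-max_angle, max_angle] (reparameterised,
            # so gradients flow to the invariance parameter).
            u = torch.rand(n, device=x.device) * 2.0 - 1.0
            angles = u * self.max_angle()
            log_probs.append(self.base_model.log_prob(rotate(x, angles)))
        # Monte Carlo estimate of log E_angle[ p(rotate(x, angle)) ].
        return (torch.logsumexp(torch.stack(log_probs), dim=0)
                - math.log(self.num_samples))
```

Maximising this averaged likelihood encourages the wrapper to grow the angular range only when doing so improves the fit, which is one simple way a degree of invariance can be learned jointly with the model.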
Cite
Text
Allingham et al. "Learning Generative Models with Invariance to Symmetries." NeurIPS 2022 Workshops: NeurReps, 2022.

Markdown
[Allingham et al. "Learning Generative Models with Invariance to Symmetries." NeurIPS 2022 Workshops: NeurReps, 2022.](https://mlanthology.org/neuripsw/2022/allingham2022neuripsw-learning/)

BibTeX
@inproceedings{allingham2022neuripsw-learning,
title = {{Learning Generative Models with Invariance to Symmetries}},
author = {Allingham, James Urquhart and Antoran, Javier and Padhy, Shreyas and Nalisnick, Eric and Hernández-Lobato, José Miguel},
booktitle = {NeurIPS 2022 Workshops: NeurReps},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/allingham2022neuripsw-learning/}
}