Neural Symmetry Detection for Learning Neural Network Constraints
Abstract
Neural symmetry detection can be defined as the deep learning-aided task of recovering both the nature of the transformation that relates points in a data set and the distribution over the magnitude of that transformation. Applications range from automatic data augmentation to model selection. In this work, we investigate how the matrix exponential can be leveraged to recover the correct symmetry transformation, encoded as a generator of a Lie group, for various transformations, both affine and non-affine. To make the calculation of the matrix exponential tractable, this operation is performed in a low-dimensional latent space. Additionally, a loss term is introduced to enforce that the generator in latent space matches the one in pixel space.
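The sketch below illustrates the core idea from the abstract: a learnable Lie-group generator acting on latent codes through the matrix exponential, with a penalty aligning it to a (projected) pixel-space generator. It is not the authors' implementation; the class and function names (`LatentGenerator`, `generator_alignment_loss`), the latent dimension, and the specific loss form are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): a learnable generator G of a
# one-parameter Lie group applied in a low-dimensional latent space via the
# matrix exponential, plus a hypothetical alignment loss.
import torch
import torch.nn as nn


class LatentGenerator(nn.Module):
    """Learnable generator G acting on latent codes as exp(t * G)."""

    def __init__(self, latent_dim: int):
        super().__init__()
        self.G = nn.Parameter(torch.randn(latent_dim, latent_dim) * 0.01)

    def forward(self, z: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Apply exp(t * G) to each latent code; t is the transformation magnitude.
        # Exponentiating a small latent_dim x latent_dim matrix keeps the
        # computation tractable compared to doing it in pixel space.
        batch_exp = torch.linalg.matrix_exp(t.view(-1, 1, 1) * self.G)  # (B, d, d)
        return torch.einsum("bij,bj->bi", batch_exp, z)


def generator_alignment_loss(G_latent: torch.Tensor,
                             G_pixel_projected: torch.Tensor) -> torch.Tensor:
    # Hypothetical penalty matching the latent-space generator to a projection
    # of the pixel-space generator, in the spirit of the abstract's loss term.
    return torch.mean((G_latent - G_pixel_projected) ** 2)


if __name__ == "__main__":
    torch.manual_seed(0)
    gen = LatentGenerator(latent_dim=8)
    z = torch.randn(4, 8)   # latent codes, e.g. from an encoder
    t = torch.rand(4)       # sampled transformation magnitudes
    print(gen(z, t).shape)  # torch.Size([4, 8])
```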
Cite
Text
Gabel et al. "Neural Symmetry Detection for Learning Neural Network Constraints." ICML 2024 Workshops: HiLD, 2024.
Markdown
[Gabel et al. "Neural Symmetry Detection for Learning Neural Network Constraints." ICML 2024 Workshops: HiLD, 2024.](https://mlanthology.org/icmlw/2024/gabel2024icmlw-neural/)
BibTeX
@inproceedings{gabel2024icmlw-neural,
title = {{Neural Symmetry Detection for Learning Neural Network Constraints}},
author = {Gabel, Alex and Quax, Rick and Gavves, Stratis},
booktitle = {ICML 2024 Workshops: HiLD},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/gabel2024icmlw-neural/}
}