Finding Symmetry in Neural Network Parameter Spaces
Abstract
Parameter space symmetries, or loss-invariant transformations, are important for understanding neural networks' loss landscape, training dynamics, and generalization. However, identifying the full set of these symmetries remains a challenge. In this paper, we formalize data-dependent parameter symmetries and derive their infinitesimal form, which enables an automated approach to discovering symmetries across different architectures. Our framework systematically uncovers parameter symmetries, including previously unknown ones. We also prove that symmetries in smaller subnetworks can extend to larger networks, allowing symmetries discovered in small architectures to generalize to more complex models.
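As a concrete instance of the loss-invariant transformations the abstract describes, consider the well-known positive rescaling symmetry of ReLU networks: scaling a hidden neuron's incoming weights by c > 0 and its outgoing weights by 1/c leaves the network function, and hence any loss, unchanged. The sketch below verifies this numerically; the variable names (W1, W2, c) are our own illustration, not notation from the paper.

```python
import numpy as np

# Two-layer ReLU network f(x) = W2 @ relu(W1 @ x).
# Because relu(c * z) = c * relu(z) for any c > 0, scaling W1 by c
# and W2 by 1/c is a loss-invariant parameter transformation.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 3))
W2 = rng.standard_normal((2, 5))
x = rng.standard_normal(3)

relu = lambda z: np.maximum(z, 0.0)
f = lambda A, B: B @ relu(A @ x)

c = 2.7  # any positive scalar works
original = f(W1, W2)
transformed = f(c * W1, W2 / c)

print(np.allclose(original, transformed))  # True
```

The paper's framework aims to discover such symmetries automatically, including data-dependent ones beyond this hand-derived example.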
Cite
Text
Zhao et al. "Finding Symmetry in Neural Network Parameter Spaces." NeurIPS 2024 Workshops: UniReps, 2024.
Markdown
[Zhao et al. "Finding Symmetry in Neural Network Parameter Spaces." NeurIPS 2024 Workshops: UniReps, 2024.](https://mlanthology.org/neuripsw/2024/zhao2024neuripsw-finding/)
BibTeX
@inproceedings{zhao2024neuripsw-finding,
  title = {{Finding Symmetry in Neural Network Parameter Spaces}},
  author = {Zhao, Bo and Dehmamy, Nima and Walters, Robin and Yu, Rose},
  booktitle = {NeurIPS 2024 Workshops: UniReps},
  year = {2024},
  url = {https://mlanthology.org/neuripsw/2024/zhao2024neuripsw-finding/}
}