Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency
Abstract
Incorporating geometric invariance into neural networks enhances parameter efficiency but typically increases computational costs. This paper introduces new equivariant neural networks that preserve symmetry while maintaining a comparable number of floating-point operations (FLOPs) per parameter to standard non-equivariant networks. We focus on horizontal mirroring (flopping) invariance, common in many computer vision tasks. The main idea is to parametrize the feature spaces in terms of mirror-symmetric and mirror-antisymmetric features, i.e., irreps of the flopping group. This decomposition makes the linear layers block-diagonal, halving the number of FLOPs they require. Our approach reduces both FLOPs and wall-clock time, providing a practical solution for efficient, scalable symmetry-aware architectures.
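To illustrate the key idea, here is a minimal sketch (not the authors' code; class and function names are illustrative assumptions) of how splitting features into mirror-symmetric and mirror-antisymmetric channels forces a flopping-equivariant linear layer to be block-diagonal, so two half-size matrix multiplies replace one full-size one:

```python
# Minimal sketch, assuming a PyTorch setup: a flopping-equivariant linear
# layer that acts block-diagonally on mirror-symmetric and mirror-antisymmetric
# feature channels (the two irreps of the horizontal-flip group).
import torch
import torch.nn as nn


class IrrepLinear(nn.Module):
    """Equivariance forbids mixing the symmetric (+1) and antisymmetric (-1)
    irreps, so the weight matrix is block-diagonal: two (d/2 x d/2) blocks
    instead of one (d x d) matrix, i.e. roughly half the FLOPs of a dense
    layer of the same total width."""

    def __init__(self, dim_sym: int, dim_anti: int):
        super().__init__()
        self.lin_sym = nn.Linear(dim_sym, dim_sym)
        # A bias on antisymmetric channels would break equivariance,
        # since those features must change sign under the flip.
        self.lin_anti = nn.Linear(dim_anti, dim_anti, bias=False)

    def forward(self, x_sym: torch.Tensor, x_anti: torch.Tensor):
        return self.lin_sym(x_sym), self.lin_anti(x_anti)


def split_into_irreps(x: torch.Tensor):
    """Split a (batch, channels, height, width) feature map into its
    mirror-symmetric and mirror-antisymmetric parts."""
    x_flip = torch.flip(x, dims=[-1])   # horizontal mirror (flopping)
    x_sym = 0.5 * (x + x_flip)          # invariant under flopping
    x_anti = 0.5 * (x - x_flip)         # changes sign under flopping
    return x_sym, x_anti
```

In this sketch, a flopped input swaps nothing between the two streams: the symmetric stream is unchanged and the antisymmetric stream is negated, which each block preserves, so the layer as a whole commutes with the flip.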
Cite
Text
Bökman et al. "Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Bökman et al. "Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/bokman2025icml-flopping/)

BibTeX
@inproceedings{bokman2025icml-flopping,
title = {{Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency}},
author = {Bökman, Georg and Nordström, David and Kahl, Fredrik},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {4823--4838},
volume = {267},
url = {https://mlanthology.org/icml/2025/bokman2025icml-flopping/}
}