The Space Between: On Folding, Symmetries and Sampling
Abstract
Recent findings suggest that consecutive layers of neural networks with the ReLU activation function \emph{fold} the input space during the learning process. While many works hint at this phenomenon, an approach to quantify the folding was only recently proposed by means of a space folding measure based on Hamming distance in the ReLU activation space. We generalize this measure to a wider class of activation functions through the introduction of equivalence classes of input data, analyse its mathematical and computational properties, and propose an efficient sampling strategy for its implementation. Moreover, it has been observed that space folding values increase with network depth when the generalization error is low, but decrease when the error increases. This supports the view that learned symmetries in the data manifold (e.g., invariance under reflection) become visible as space folds, contributing to the network's generalization capacity. Inspired by these findings, we outline a novel regularization scheme that encourages the network to seek solutions characterized by higher folding values.
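The folding measure itself is defined in the paper; as a rough illustration of the ingredient it builds on, the sketch below (not the authors' implementation; network sizes and helper names are hypothetical) computes the normalized Hamming distance between the binary ReLU activation patterns of two inputs passed through a toy MLP.

```python
# Minimal sketch: Hamming distance between ReLU activation patterns.
# The architecture and weights here are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-layer ReLU MLP with random weights (assumed, for illustration).
W1, b1 = rng.standard_normal((16, 4)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((16, 16)), rng.standard_normal(16)

def activation_pattern(x):
    """Concatenated binary ReLU on/off pattern of input x across both layers."""
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0.0) + b2
    return np.concatenate([(h1 > 0), (h2 > 0)]).astype(int)

def hamming(x, y):
    """Normalized Hamming distance between the activation patterns of x and y."""
    px, py = activation_pattern(x), activation_pattern(y)
    return float(np.mean(px != py))

x, y = rng.standard_normal(4), rng.standard_normal(4)
print(hamming(x, y))  # fraction of ReLU units whose on/off state differs
```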
Cite
Text
Lewandowski et al. "The Space Between: On Folding, Symmetries and Sampling." ICLR 2025 Workshops: DeLTa, 2025.
Markdown
[Lewandowski et al. "The Space Between: On Folding, Symmetries and Sampling." ICLR 2025 Workshops: DeLTa, 2025.](https://mlanthology.org/iclrw/2025/lewandowski2025iclrw-space/)
BibTeX
@inproceedings{lewandowski2025iclrw-space,
  title     = {{The Space Between: On Folding, Symmetries and Sampling}},
  author    = {Lewandowski, Michal and Heinzl, Bernhard and Pisoni, Raphael and Moser, Bernhard A.},
  booktitle = {ICLR 2025 Workshops: DeLTa},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/lewandowski2025iclrw-space/}
}