Group Downsampling with Equivariant Anti-Aliasing
Abstract
Downsampling layers are crucial building blocks in CNN architectures; they increase the receptive field for learning high-level features and reduce the amount of memory/computation in the model. In this work, we study the generalization of the uniform downsampling layer to group equivariant architectures, e.g., $G$-CNNs. That is, we aim to downsample signals (feature maps) on general finite groups *with* anti-aliasing. This involves the following: **(a)** Given a finite group and a downsampling rate, we present an algorithm to form a suitable choice of subgroup. **(b)** Given a group and a subgroup, we study the notion of bandlimited-ness and propose how to perform anti-aliasing. Notably, our method generalizes the notion of downsampling based on classical sampling theory. When the signal is on a cyclic group, i.e., periodic, our method recovers the standard downsampling of an ideal low-pass filter followed by a subsampling operation. Finally, we conduct experiments on image classification tasks demonstrating that the proposed downsampling operation improves accuracy, better preserves equivariance, and reduces model size when incorporated into $G$-equivariant networks.
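The cyclic-group special case mentioned above can be made concrete. The following is a minimal sketch (not the authors' implementation) of anti-aliased downsampling on the cyclic group $\mathbb{Z}_N$: an ideal low-pass filter in the Fourier domain followed by subsampling by a rate $R$. For a signal already bandlimited to the target band, this coincides with naive subsampling `x[::R]`; otherwise it removes the frequencies that would alias.

```python
import numpy as np

def downsample_cyclic(x, R):
    """Anti-aliased downsampling of a periodic signal by rate R (sketch)."""
    N = len(x)
    assert N % R == 0, "rate must divide the signal length"
    M = N // R
    # Ideal low-pass: keep the M centred frequencies, discard the rest.
    X = np.fft.fftshift(np.fft.fft(x))
    Y = X[N // 2 - M // 2 : N // 2 + (M - M // 2)]
    # Inverse transform on the smaller group Z_M; the 1/R factor
    # matches the normalisation of the original length-N transform.
    return np.fft.ifft(np.fft.ifftshift(Y)) / R

# Bandlimited example: frequency 3 fits inside the target band of Z_8,
# so anti-aliased downsampling coincides with naive subsampling.
N, R = 16, 2
n = np.arange(N)
x = np.cos(2 * np.pi * 3 * n / N)
y = downsample_cyclic(x, R)
print(np.allclose(y.real, x[::R]))  # True: no aliasing to remove
```

For a signal containing frequencies above the target band, `downsample_cyclic` and `x[::R]` would differ, which is exactly the anti-aliasing effect the paper generalizes to non-cyclic groups.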
Cite
Text
Rahman and Yeh. "Group Downsampling with Equivariant Anti-Aliasing." International Conference on Learning Representations, 2025.

Markdown
[Rahman and Yeh. "Group Downsampling with Equivariant Anti-Aliasing." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/rahman2025iclr-group/)

BibTeX
@inproceedings{rahman2025iclr-group,
  title = {{Group Downsampling with Equivariant Anti-Aliasing}},
  author = {Rahman, Md Ashiqur and Yeh, Raymond A.},
  booktitle = {International Conference on Learning Representations},
  year = {2025},
  url = {https://mlanthology.org/iclr/2025/rahman2025iclr-group/}
}