Alias-Free Convnets: Fractional Shift Invariance via Polynomial Activations
Abstract
Although CNNs are believed to be invariant to translations, recent works have shown this is not the case due to aliasing effects that stem from down-sampling layers. The existing architectural solutions to prevent the aliasing effects are partial since they do not solve those effects that originate in non-linearities. We propose an extended anti-aliasing method that tackles both down-sampling and non-linear layers, thus creating truly alias-free, shift-invariant CNNs. We show that the presented model is invariant to integer as well as fractional (i.e., sub-pixel) translations, thus outperforming other shift-invariant methods in terms of robustness to adversarial translations.
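Below is a minimal sketch (not the authors' code) of the signal-processing idea behind fractional shift invariance: a sub-pixel circular shift implemented with the Fourier shift theorem, and alias-free downsampling implemented as an ideal low-pass filter (a frequency-domain crop) instead of plain striding. The function names, sizes, and the toy check are illustrative assumptions; the paper additionally replaces ReLUs with polynomial activations so that the non-linearities themselves remain band-limited, which this sketch does not cover.

import torch

def fractional_shift(x, dx, dy):
    # Circularly shift an (H, W) image by a possibly fractional number of pixels
    # using the Fourier shift theorem: multiply the spectrum by a linear phase.
    h, w = x.shape[-2:]
    fy = torch.fft.fftfreq(h, d=1.0, dtype=x.dtype)  # vertical freqs, cycles/pixel
    fx = torch.fft.fftfreq(w, d=1.0, dtype=x.dtype)  # horizontal freqs, cycles/pixel
    phase = torch.exp(-2j * torch.pi * (fy[:, None] * dy + fx[None, :] * dx))
    return torch.fft.ifft2(torch.fft.fft2(x) * phase).real

def ideal_downsample(x, factor=2):
    # Alias-free downsampling: keep only frequencies below the new Nyquist limit
    # by cropping the centered spectrum, then return to the spatial domain.
    h, w = x.shape[-2:]
    X = torch.fft.fftshift(torch.fft.fft2(x))
    ch, cw = h // factor, w // factor
    top, left = (h - ch) // 2, (w - cw) // 2
    X_lp = X[..., top:top + ch, left:left + cw]
    X_lp[..., 0, :] = 0  # zero the band exactly at the new Nyquist frequency,
    X_lp[..., :, 0] = 0  # so the retained spectrum is strictly band-limited
    return torch.fft.ifft2(torch.fft.ifftshift(X_lp)).real / factor ** 2

# Toy check: downsampling a fractionally shifted image equals fractionally
# shifting the downsampled image (with the shift scaled by the factor).
# Strided pooling breaks this equivariance; the ideal low-pass preserves it.
x = torch.randn(64, 64, dtype=torch.float64)
a = ideal_downsample(fractional_shift(x, dx=0.5, dy=1.5))
b = fractional_shift(ideal_downsample(x), dx=0.25, dy=0.75)
print(torch.allclose(a, b))  # True, up to floating-point error

In a full network, equivariance of every layer to fractional shifts, followed by global pooling at the end, is what yields the shift-invariant classifier the abstract refers to.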
Cite
Text
Michaeli et al. "Alias-Free Convnets: Fractional Shift Invariance via Polynomial Activations." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01567
Markdown
[Michaeli et al. "Alias-Free Convnets: Fractional Shift Invariance via Polynomial Activations." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/michaeli2023cvpr-aliasfree/) doi:10.1109/CVPR52729.2023.01567
BibTeX
@inproceedings{michaeli2023cvpr-aliasfree,
title = {{Alias-Free Convnets: Fractional Shift Invariance via Polynomial Activations}},
author = {Michaeli, Hagay and Michaeli, Tomer and Soudry, Daniel},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {16333--16342},
doi = {10.1109/CVPR52729.2023.01567},
url = {https://mlanthology.org/cvpr/2023/michaeli2023cvpr-aliasfree/}
}