Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets
Abstract
Deep networks have achieved impressive results on a range of well-curated benchmark datasets. Surprisingly, their performance remains sensitive to perturbations that have little effect on human performance. In this work, we propose a novel extension of Mixup called Robustmix that regularizes networks to classify based on lower-frequency spatial features. We show that this type of regularization improves robustness on a range of benchmarks such as ImageNet-C and Stylized ImageNet. It adds little computational overhead and does not require a priori knowledge of a large set of image transformations. We find that this approach further complements recent advances in model architecture and data augmentation, attaining a state-of-the-art mCE of 44.8 with an EfficientNet-B8 model and RandAugment, a reduction of 16 mCE relative to the baseline.
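The abstract describes the method only at a high level: mix images in a way that biases supervision toward low-frequency content. The sketch below is a minimal NumPy illustration of that idea, not the paper's exact algorithm; the function names (`frequency_split`, `robustmix_pair`), the radial FFT `cutoff`, and the choice of independent mixing coefficients for the low and high bands are all illustrative assumptions.

```python
import numpy as np

def frequency_split(x, cutoff):
    """Split an image into low- and high-frequency parts with an FFT mask.

    x: float array of shape (H, W, C).
    cutoff: fraction (0..1) of the maximum frequency kept in the low band.
    """
    H, W = x.shape[:2]
    fy = np.fft.fftfreq(H)[:, None]          # vertical frequencies
    fx = np.fft.fftfreq(W)[None, :]          # horizontal frequencies
    radius = np.sqrt(fy ** 2 + fx ** 2)      # radial frequency magnitude
    mask = (radius <= cutoff * 0.5)[..., None]  # 0.5 is the Nyquist limit
    spectrum = np.fft.fft2(x, axes=(0, 1))
    low = np.real(np.fft.ifft2(spectrum * mask, axes=(0, 1)))
    return low, x - low

def robustmix_pair(x1, y1, x2, y2, alpha=1.0, cutoff=0.1):
    """Hypothetical frequency-band mixup for one pair of examples.

    The low bands of the two images are mixed with coefficient lam, the
    high bands with an independent coefficient, and the label follows the
    low-frequency mix so the network is pushed to rely on low frequencies.
    """
    lam = np.random.beta(alpha, alpha)
    lam_high = np.random.beta(alpha, alpha)
    low1, high1 = frequency_split(x1, cutoff)
    low2, high2 = frequency_split(x2, cutoff)
    x = (lam * low1 + (1 - lam) * low2) \
        + (lam_high * high1 + (1 - lam_high) * high2)
    y = lam * y1 + (1 - lam) * y2  # label tied to the low-frequency band
    return x, y
```

As a sanity check on this sketch, making `cutoff` large enough that the mask covers the whole spectrum leaves the high band empty, and the procedure reduces to standard Mixup.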
Cite
Text
Ngnawe et al. "Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets." NeurIPS 2022 Workshops: DistShift, 2022.Markdown
[Ngnawe et al. "Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets." NeurIPS 2022 Workshops: DistShift, 2022.](https://mlanthology.org/neuripsw/2022/ngnawe2022neuripsw-robustmix/)BibTeX
@inproceedings{ngnawe2022neuripsw-robustmix,
title = {{Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets}},
author = {Ngnawe, Jonas and Njifon, Marianne Abemgnigni and Heek, Jonathan and Dauphin, Yann},
booktitle = {NeurIPS 2022 Workshops: DistShift},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/ngnawe2022neuripsw-robustmix/}
}