Fourier-Based Augmentations for Improved Robustness and Uncertainty Calibration

Abstract

Diverse data augmentation strategies are a natural approach to improving robustness in computer vision models against unforeseen shifts in data distribution. However, the ability to tailor such strategies to inoculate a model against specific classes of corruptions or attacks, without incurring substantial losses in robustness against other classes of corruptions, remains elusive. In this work, we successfully harden a model against Fourier-based attacks, while producing superior-to-AugMix accuracy and calibration results on both the CIFAR-10-C and CIFAR-100-C datasets; classification error is reduced by over ten percentage points for some high-severity noise and digital-type corruptions. We achieve this by incorporating Fourier-basis perturbations in the AugMix image-augmentation framework. Thus we demonstrate that the AugMix framework can be tailored to effectively target particular distribution shifts, while boosting overall model robustness.
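A minimal sketch, in NumPy, of the kind of Fourier-basis perturbation the abstract describes: each basis image is a single-frequency sinusoid obtained from the inverse FFT of a one-hot frequency grid, normalized to unit norm, and added to the image with a random sign and amplitude. The function names (`fourier_basis_image`, `fourier_perturb`) and the amplitude parameter `eps` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fourier_basis_image(size, i, j):
    """Real-valued Fourier basis image: a sinusoid at frequency (i, j),
    built from the inverse FFT of a one-hot frequency grid, unit L2 norm."""
    freq = np.zeros((size, size), dtype=complex)
    freq[i % size, j % size] = 1.0
    basis = np.real(np.fft.ifft2(freq))
    return basis / (np.linalg.norm(basis) + 1e-12)

def fourier_perturb(image, eps=4.0, rng=None):
    """Add a randomly chosen single-frequency Fourier-basis perturbation,
    with random sign, to an H x W x C image with values in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    h = image.shape[0]
    i, j = rng.integers(0, h, size=2)        # random spatial frequency
    basis = fourier_basis_image(h, i, j)
    sign = rng.choice([-1.0, 1.0])
    perturbed = image + sign * eps * basis[..., None]  # broadcast over channels
    return np.clip(perturbed, 0.0, 1.0)
```

In the AugMix setting, images perturbed this way would then be mixed convexly with other augmented copies of the input and combined with the clean image; the sketch above shows only the Fourier-basis perturbation itself.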

Cite

Text

Soklaski et al. "Fourier-Based Augmentations for Improved Robustness and Uncertainty Calibration." NeurIPS 2021 Workshops: DistShift, 2021.

Markdown

[Soklaski et al. "Fourier-Based Augmentations for Improved Robustness and Uncertainty Calibration." NeurIPS 2021 Workshops: DistShift, 2021.](https://mlanthology.org/neuripsw/2021/soklaski2021neuripsw-fourierbased/)

BibTeX

@inproceedings{soklaski2021neuripsw-fourierbased,
  title     = {{Fourier-Based Augmentations for Improved Robustness and Uncertainty Calibration}},
  author    = {Soklaski, Ryan and Yee, Michael and Tsiligkaridis, Theodoros},
  booktitle = {NeurIPS 2021 Workshops: DistShift},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/soklaski2021neuripsw-fourierbased/}
}