Semantic Perturbations with Normalizing Flows for Improved Generalization
Abstract
Several methods from two separate lines of work, namely data augmentation (DA) and adversarial training, rely on perturbations performed in a latent space. Often, these methods are either non-interpretable due to their non-invertibility or notoriously difficult to train due to their numerous hyperparameters. We exploit the exactly invertible encoder-decoder structure of normalizing flows to perform perturbations in the latent space. We demonstrate that these on-manifold perturbations match the performance of advanced DA techniques---reaching $96.6\%$ test accuracy on CIFAR-10 using ResNet-18---and outperform existing methods, particularly in low-data regimes, yielding a $10$--$25\%$ relative improvement in test accuracy over classical training. We find that our latent adversarial perturbations, which adapt to the classifier throughout its training, are the most effective.
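The core idea in the abstract---encode a sample with an invertible flow, perturb it in latent space, and decode it back to an on-manifold sample---can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the toy "flow" here is a hand-picked invertible affine map, whereas the paper uses trained normalizing flows, and the function names (`encode`, `decode`, `semantic_perturbation`) and the noise scale `sigma` are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy invertible "flow": z = A @ x + b, with exact inverse x = A^{-1} @ (z - b).
# A trained normalizing flow would play this role in the paper's setting.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # invertible (nonzero determinant)
b = np.array([0.5, -0.5])
A_inv = np.linalg.inv(A)

def encode(x):
    """Map a data point to latent space (forward pass of the flow)."""
    return A @ x + b

def decode(z):
    """Map a latent point back to data space (exact inverse of the flow)."""
    return A_inv @ (z - b)

def semantic_perturbation(x, sigma=0.1):
    """Perturb x by adding Gaussian noise in latent space, then decode.

    Because the flow is exactly invertible, the decoded point is a valid
    data-space sample ("on-manifold" in the paper's terminology).
    """
    z = encode(x)
    z_perturbed = z + sigma * rng.standard_normal(z.shape)
    return decode(z_perturbed)

x = np.array([1.0, 2.0])
x_aug = semantic_perturbation(x)

# Exact invertibility: decode(encode(x)) recovers x up to float error.
assert np.allclose(decode(encode(x)), x)
```

The adversarial variant described in the abstract would replace the random latent noise with a step along the gradient of the classifier's loss with respect to `z`, recomputed as the classifier trains.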
Cite
Text
Yüksel et al. "Semantic Perturbations with Normalizing Flows for Improved Generalization." ICML 2021 Workshops: INNF, 2021.
Markdown
[Yüksel et al. "Semantic Perturbations with Normalizing Flows for Improved Generalization." ICML 2021 Workshops: INNF, 2021.](https://mlanthology.org/icmlw/2021/yuksel2021icmlw-semantic/)
BibTeX
@inproceedings{yuksel2021icmlw-semantic,
title = {{Semantic Perturbations with Normalizing Flows for Improved Generalization}},
author = {Yüksel, Oğuz Kaan and Stich, Sebastian U. and Jaggi, Martin and Chavdarova, Tatjana},
booktitle = {ICML 2021 Workshops: INNF},
year = {2021},
url = {https://mlanthology.org/icmlw/2021/yuksel2021icmlw-semantic/}
}