Domain Generalization by Rejecting Extreme Augmentations
Abstract
Data augmentation is one of the most powerful techniques for regularizing deep learning models and improving their recognition performance in a variety of tasks and domains. However, this holds for standard in-domain settings, in which the training and test data follow the same distribution. In out-of-domain settings, where the test data follows a different and unknown distribution, the best recipe for data augmentation is unclear. In this paper, we show that data augmentation can also bring a conspicuous and robust improvement in performance for out-of-domain or domain generalization settings. To do so, we propose a simple procedure: i) use uniform sampling on standard data augmentation transformations; ii) increase the strength of the transformations to account for the higher data variance expected when working out of domain; and iii) devise a new reward function to reject extreme transformations that can harm training. With this simple formula, our data augmentation scheme achieves results comparable to or better than the state of the art on most domain generalization datasets.
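The three-step procedure above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the augmentation pool, the strength parameter, and in particular the reward function (here a simple variance proxy standing in for the paper's learned reward) are all hypothetical placeholders.

```python
import random
import numpy as np

# Hypothetical pool of augmentations on images in [0, 1], each with a strength knob.
AUGMENTATIONS = {
    "brightness": lambda x, s: np.clip(x + s, 0.0, 1.0),
    "contrast":   lambda x, s: np.clip((x - x.mean()) * (1.0 + s) + x.mean(), 0.0, 1.0),
    "noise":      lambda x, s: np.clip(x + np.random.normal(0.0, s, x.shape), 0.0, 1.0),
}

def augment_with_rejection(image, strength=0.8, reward_fn=None,
                           threshold=0.1, max_tries=10):
    """Uniformly sample a high-strength augmentation; reject extreme outputs.

    `reward_fn` scores how much usable signal survives the transformation.
    It is a placeholder for the paper's reward; here we use remaining image
    variance as a crude proxy (an all-flat image carries no signal).
    """
    if reward_fn is None:
        reward_fn = lambda x: float(np.std(x))
    for _ in range(max_tries):
        name = random.choice(list(AUGMENTATIONS))       # i) uniform sampling
        aug = AUGMENTATIONS[name](image, strength)      # ii) high strength
        if reward_fn(aug) >= threshold:                 # iii) reject extremes
            return aug, name
    return image, "identity"  # all candidates rejected: keep the clean image
```

In this sketch, an augmentation whose output falls below the reward threshold (e.g. a brightness shift that saturates the whole image) is resampled rather than fed to training, which is the rejection mechanism the abstract describes.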
Cite
Text
Aminbeidokhti et al. "Domain Generalization by Rejecting Extreme Augmentations." Winter Conference on Applications of Computer Vision, 2024.
Markdown
[Aminbeidokhti et al. "Domain Generalization by Rejecting Extreme Augmentations." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/aminbeidokhti2024wacv-domain/)
BibTeX
@inproceedings{aminbeidokhti2024wacv-domain,
title = {{Domain Generalization by Rejecting Extreme Augmentations}},
author = {Aminbeidokhti, Masih and Peña, Fidel A. Guerrero and Medeiros, Heitor Rapela and Dubail, Thomas and Granger, Eric and Pedersoli, Marco},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2024},
pages = {2215--2225},
url = {https://mlanthology.org/wacv/2024/aminbeidokhti2024wacv-domain/}
}