Sample Efficiency of Data Augmentation Consistency Regularization
Abstract
Data augmentation is popular in the training of large neural networks; however, the theoretical understanding of how different algorithmic choices for leveraging augmented data compare remains limited. In this paper, we take a step in this direction – we first present a simple and novel analysis for linear regression with label-invariant augmentations, demonstrating that data augmentation consistency (DAC) regularization is intrinsically more sample-efficient than empirical risk minimization on augmented data (DA-ERM). The analysis is then generalized to misspecified augmentations (i.e., augmentations that change the labels), which again demonstrates the merit of DAC over DA-ERM. Further, we extend our analysis to non-linear models (e.g., neural networks) and present generalization bounds. Finally, we perform experiments that make a clean and apples-to-apples comparison (i.e., with no extra modeling or data tweaks) between DAC and DA-ERM using CIFAR-100 and WideResNet; these together demonstrate the superior efficacy of DAC.
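The distinction between the two objectives can be sketched in the paper's linear-regression setting: DA-ERM fits the original labels on both original and augmented inputs, while DAC fits labels on the original data and separately penalizes prediction inconsistency between each input and its augmentation. The following is a minimal illustrative sketch; all names (`augment`, `lam`, the toy data) are assumptions for illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data with a label-invariant augmentation
# (small input perturbation that leaves the label unchanged).
n, d = 32, 8
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.1 * rng.normal(size=n)

def augment(X):
    """Hypothetical label-invariant augmentation: a small perturbation."""
    return X + 0.05 * rng.normal(size=X.shape)

X_aug = augment(X)

def da_erm_loss(theta):
    # DA-ERM: empirical risk minimization over the union of the
    # original and augmented samples, each paired with the original label.
    preds = np.concatenate([X, X_aug]) @ theta
    labels = np.concatenate([y, y])
    return np.mean((preds - labels) ** 2)

def dac_loss(theta, lam=1.0):
    # DAC: fit labels on the original data only, plus a consistency
    # penalty that ties predictions on x and its augmentation together.
    fit = np.mean((X @ theta - y) ** 2)
    consistency = np.mean((X @ theta - X_aug @ theta) ** 2)
    return fit + lam * consistency
```

The consistency term carries no label information, which is why (per the paper's analysis) enforcing it as an explicit constraint can use augmented samples more efficiently than simply enlarging the training set as DA-ERM does.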
Cite
Text
Yang et al. "Sample Efficiency of Data Augmentation Consistency Regularization." Artificial Intelligence and Statistics, 2023.
Markdown
[Yang et al. "Sample Efficiency of Data Augmentation Consistency Regularization." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/yang2023aistats-sample/)
BibTeX
@inproceedings{yang2023aistats-sample,
title = {{Sample Efficiency of Data Augmentation Consistency Regularization}},
author = {Yang, Shuo and Dong, Yijun and Ward, Rachel and Dhillon, Inderjit S. and Sanghavi, Sujay and Lei, Qi},
booktitle = {Artificial Intelligence and Statistics},
year = {2023},
pages = {3825--3853},
volume = {206},
url = {https://mlanthology.org/aistats/2023/yang2023aistats-sample/}
}