Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules
Abstract
A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations. Properly chosen augmentation policies can lead to significant generalization improvements; however, state-of-the-art approaches such as AutoAugment are computationally infeasible to run for the ordinary user. In this paper, we introduce a new data augmentation algorithm, Population Based Augmentation (PBA), which generates nonstationary augmentation policy schedules instead of a fixed augmentation policy. We show that PBA can match the performance of AutoAugment on CIFAR-10, CIFAR-100, and SVHN, with three orders of magnitude less overall compute. On CIFAR-10 we achieve a mean test error of 1.46%, which is a slight improvement upon the current state-of-the-art. The code for PBA is open source and is available at https://github.com/arcelien/pba.
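The abstract describes learning a schedule of augmentation policies that changes over training rather than a single fixed policy. Below is a minimal, self-contained sketch of what applying such an epoch-indexed schedule could look like; the operation names, breakpoints, probabilities, and magnitudes are illustrative placeholders and are not taken from the paper or the PBA repository.

```python
# Minimal sketch (not the authors' implementation) of a nonstationary
# augmentation policy schedule: the set of (operation, probability,
# magnitude) triples depends on the training epoch.
import random

# Hypothetical schedule: epoch threshold -> list of (op_name, prob, magnitude).
SCHEDULE = {
    0:   [("rotate", 0.2, 2), ("shear_x", 0.3, 1)],
    50:  [("rotate", 0.6, 6), ("cutout", 0.5, 2)],
    150: [("rotate", 0.4, 4), ("cutout", 0.8, 3)],
}

def policy_for_epoch(epoch):
    """Return the most recent policy entry at or before `epoch`."""
    active = sorted(k for k in SCHEDULE if k <= epoch)
    return SCHEDULE[active[-1]]

def apply_policy(image, policy, ops):
    """Stochastically apply each (op, prob, magnitude) in the policy."""
    for name, prob, magnitude in policy:
        if random.random() < prob:
            image = ops[name](image, magnitude)
    return image

if __name__ == "__main__":
    # Trivial stand-in "operations" on a toy 1-D "image" (a list of pixels),
    # used only to keep the sketch dependency-free.
    ops = {
        "rotate":  lambda img, m: img[::-1],
        "shear_x": lambda img, m: img[m:] + img[:m],
        "cutout":  lambda img, m: img[:-m] + [0] * m,
    }
    image = list(range(16))
    for epoch in (0, 60, 200):
        print(epoch, apply_policy(image, policy_for_epoch(epoch), ops))
```

In PBA the schedule itself is discovered during training via population based training of the policy hyperparameters; the sketch above only illustrates how a fixed, already-learned schedule would be consumed at training time.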
Cite
Text
Ho et al. "Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules." International Conference on Machine Learning, 2019.
Markdown
[Ho et al. "Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/ho2019icml-population/)
BibTeX
@inproceedings{ho2019icml-population,
title = {{Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules}},
author = {Ho, Daniel and Liang, Eric and Chen, Xi and Stoica, Ion and Abbeel, Pieter},
booktitle = {International Conference on Machine Learning},
year = {2019},
pages = {2731--2741},
volume = {97},
url = {https://mlanthology.org/icml/2019/ho2019icml-population/}
}