Remix: Rebalanced Mixup
Abstract
Deep image classifiers often perform poorly when training data are heavily class-imbalanced. In this work, we propose a new regularization technique, Remix, that relaxes Mixup's formulation and enables the mixing factors of features and labels to be disentangled. Specifically, when mixing two samples, the features are mixed in the same fashion as in Mixup, but Remix assigns the label in favor of the minority class by giving it a disproportionately higher weight. By doing so, the classifier learns to push the decision boundaries towards the majority classes and to balance the generalization error between majority and minority classes. We study state-of-the-art regularization techniques such as Mixup, Manifold Mixup, and CutMix under the class-imbalanced regime, and show that the proposed Remix significantly outperforms these methods as well as several re-weighting and re-sampling techniques on imbalanced datasets constructed from CIFAR-10, CIFAR-100, and CINIC-10. We also evaluate Remix on a real-world large-scale imbalanced dataset, iNaturalist 2018. The experimental results confirm that Remix provides consistent and significant improvements over previous methods.
Cite
Text
Chou et al. "Remix: Rebalanced Mixup." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-65414-6_9
Markdown
[Chou et al. "Remix: Rebalanced Mixup." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/chou2020eccvw-remix/) doi:10.1007/978-3-030-65414-6_9
BibTeX
@inproceedings{chou2020eccvw-remix,
title = {{Remix: Rebalanced Mixup}},
author = {Chou, Hsin-Ping and Chang, Shih-Chieh and Pan, Jia-Yu and Wei, Wei and Juan, Da-Cheng},
booktitle = {European Conference on Computer Vision Workshops},
year = {2020},
pages = {95--110},
doi = {10.1007/978-3-030-65414-6_9},
url = {https://mlanthology.org/eccvw/2020/chou2020eccvw-remix/}
}