Split Batch Normalization: Improving Semi-Supervised Learning Under Domain Shift
Abstract
Recent work has shown that using unlabeled data in semi-supervised learning is not always beneficial and can even hurt generalization, especially when there is a class mismatch between the unlabeled and labeled examples. We investigate this phenomenon for image classification under class mismatch and many other forms of domain shift (e.g., salt-and-pepper noise). Our main contribution is showing how to benefit from additional unlabeled data that comes from a shifted distribution in batch-normalized neural networks. We achieve this by simply using separate batch normalization statistics for the unlabeled examples. Due to its simplicity, we recommend it as a standard practice.
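The abstract only states that unlabeled examples get their own batch normalization statistics; below is a minimal sketch of how that could look in PyTorch. `SplitBatchNorm2d` and the `stream` argument are hypothetical names, and sharing the affine parameters (weight and bias) across the two streams is an illustrative design choice, not necessarily the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SplitBatchNorm2d(nn.Module):
    """BatchNorm2d variant keeping separate running statistics per data stream."""

    def __init__(self, num_features, momentum=0.1, eps=1e-5):
        super().__init__()
        self.momentum = momentum
        self.eps = eps
        # Affine parameters shared by both streams (an assumption of this sketch).
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        # One set of running statistics for each data stream.
        for stream in ("labeled", "unlabeled"):
            self.register_buffer(f"running_mean_{stream}", torch.zeros(num_features))
            self.register_buffer(f"running_var_{stream}", torch.ones(num_features))

    def forward(self, x, stream="labeled"):
        running_mean = getattr(self, f"running_mean_{stream}")
        running_var = getattr(self, f"running_var_{stream}")
        # In training mode this normalizes with batch statistics and updates
        # only the chosen stream's running statistics in place.
        return F.batch_norm(x, running_mean, running_var,
                            self.weight, self.bias,
                            self.training, self.momentum, self.eps)


bn = SplitBatchNorm2d(num_features=16)
labeled = torch.randn(8, 16, 32, 32)
unlabeled = torch.randn(8, 16, 32, 32)
out_l = bn(labeled, stream="labeled")      # updates labeled statistics
out_u = bn(unlabeled, stream="unlabeled")  # updates unlabeled statistics
```

At test time, evaluation data presumably comes from the labeled distribution, so one would switch the module to eval mode and normalize with the labeled stream's statistics (the default `stream="labeled"` here).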
Cite

Text

Zając et al. "Split Batch Normalization: Improving Semi-Supervised Learning Under Domain Shift." ICLR 2019 Workshops: LLD, 2019.

Markdown

[Zając et al. "Split Batch Normalization: Improving Semi-Supervised Learning Under Domain Shift." ICLR 2019 Workshops: LLD, 2019.](https://mlanthology.org/iclrw/2019/zajac2019iclrw-split/)

BibTeX
@inproceedings{zajac2019iclrw-split,
  title     = {{Split Batch Normalization: Improving Semi-Supervised Learning Under Domain Shift}},
  author    = {Zając, Michał and Zolna, Konrad and Jastrzębski, Stanisław},
  booktitle = {ICLR 2019 Workshops: LLD},
  year      = {2019},
  url       = {https://mlanthology.org/iclrw/2019/zajac2019iclrw-split/}
}