Self-Ensembling for Visual Domain Adaptation

Abstract

This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen & Valpola, 2017) of temporal ensembling (Laine & Aila, 2017), a technique that achieved state-of-the-art results in semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state-of-the-art results on a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. On small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy close to that of a classifier trained in a supervised fashion.
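The mean teacher idea underlying this work can be sketched briefly: the teacher's weights are an exponential moving average (EMA) of the student's, and a consistency loss penalizes disagreement between student and teacher predictions (in the domain adaptation setting, on unlabelled target-domain samples). The snippet below is a minimal NumPy illustration of those two ingredients only; the smoothing coefficient `alpha`, the toy linear "network," and all variable names are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    # Teacher parameters track an exponential moving average of the student's.
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k] for k in teacher}

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(student_logits, teacher_logits):
    # Mean squared difference between student and teacher class probabilities,
    # one common form of the self-ensembling consistency penalty.
    return np.mean((softmax(student_logits) - softmax(teacher_logits)) ** 2)

# Toy example: a single linear layer "w" applied to one input x.
x = np.array([1.0, -1.0])
student = {"w": np.array([[0.5, -0.5], [0.2, 0.3]])}
teacher = {"w": np.zeros((2, 2))}

teacher = ema_update(teacher, student, alpha=0.9)  # teacher = 0.1 * student here
loss = consistency_loss(x @ student["w"], x @ teacher["w"])
```

In practice the student is trained by gradient descent on labelled source data plus this consistency term, while the teacher is never trained directly and only follows the student through the EMA update.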

Cite

Text

French et al. "Self-Ensembling for Visual Domain Adaptation." International Conference on Learning Representations, 2018.

Markdown

[French et al. "Self-Ensembling for Visual Domain Adaptation." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/french2018iclr-selfensembling/)

BibTeX

@inproceedings{french2018iclr-selfensembling,
  title     = {{Self-Ensembling for Visual Domain Adaptation}},
  author    = {French, Geoff and Mackiewicz, Michal and Fisher, Mark},
  booktitle = {International Conference on Learning Representations},
  year      = {2018},
  url       = {https://mlanthology.org/iclr/2018/french2018iclr-selfensembling/}
}