Disentangled Image Generation for Unsupervised Domain Adaptation

Abstract

We explore the use of generative modeling in unsupervised domain adaptation (UDA), where annotated real images are available only in the source domain, and pseudo images are generated in a manner that allows independent control of class (content) and nuisance variability (style). The proposed method differs from existing generative UDA models in that we explicitly disentangle the content and nuisance features at different layers of the generator network. We demonstrate the effectiveness of (pseudo-)conditional generation by showing that it improves upon baseline methods. Moreover, we outperform the previous state-of-the-art by significant margins on recently introduced multi-source domain adaptation (MSDA) tasks, achieving error reduction rates of $50.27\%$, $89.54\%$, $75.35\%$, $27.46\%$, and $94.3\%$ across all five tasks.

Cite

Text

Cicek et al. "Disentangled Image Generation for Unsupervised Domain Adaptation." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-66415-2_44

Markdown

[Cicek et al. "Disentangled Image Generation for Unsupervised Domain Adaptation." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/cicek2020eccvw-disentangled/) doi:10.1007/978-3-030-66415-2_44

BibTeX

@inproceedings{cicek2020eccvw-disentangled,
  title     = {{Disentangled Image Generation for Unsupervised Domain Adaptation}},
  author    = {Cicek, Safa and Xu, Ning and Wang, Zhaowen and Jin, Hailin and Soatto, Stefano},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2020},
  pages     = {662-665},
  doi       = {10.1007/978-3-030-66415-2_44},
  url       = {https://mlanthology.org/eccvw/2020/cicek2020eccvw-disentangled/}
}