Discrepancy-Based Networks for Unsupervised Domain Adaptation: A Comparative Study
Abstract
Domain Adaptation (DA) exploits labeled data and models from similar domains to alleviate the annotation burden when learning a model in a new domain. Our contribution to the field is three-fold. First, we propose a new dataset, LandMarkDA, to study the adaptation between landmark place recognition models trained on different artistic image styles, such as photos, paintings, and drawings. LandMarkDA poses new adaptation challenges on which current deep architectures reach their limits. Second, we present an experimental study of recent shallow and deep adaptation networks that use Maximum Mean Discrepancy (MMD) to bridge the domain gap. We study different design choices for these models by varying the network architectures and evaluate them on OFF31 and the new LandMarkDA collections. We show that shallow networks can still be competitive when paired with appropriate feature extraction. Finally, we also benchmark a new DA method that successfully combines artistic image style transfer with deep discrepancy-based networks.
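The discrepancy measure behind the networks compared in the paper, Maximum Mean Discrepancy, can be sketched with a standard biased two-sample estimate. This is a generic illustration, not the paper's implementation; the RBF kernel and the `gamma` bandwidth are assumptions for the sketch:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel values between rows of X and rows of Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(X_src, X_tgt, gamma=1.0):
    # Biased empirical estimate of squared MMD between source and target samples:
    #   MMD^2 = E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)]
    # Near zero when the two feature distributions match; larger otherwise.
    k_ss = rbf_kernel(X_src, X_src, gamma).mean()
    k_tt = rbf_kernel(X_tgt, X_tgt, gamma).mean()
    k_st = rbf_kernel(X_src, X_tgt, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

In discrepancy-based networks this quantity is computed on intermediate feature layers and added to the classification loss, so that minimizing it aligns source and target feature distributions.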
Cite
Text
Csurka et al. "Discrepancy-Based Networks for Unsupervised Domain Adaptation: A Comparative Study." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.312
Markdown
[Csurka et al. "Discrepancy-Based Networks for Unsupervised Domain Adaptation: A Comparative Study." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/csurka2017iccvw-discrepancybased/) doi:10.1109/ICCVW.2017.312
BibTeX
@inproceedings{csurka2017iccvw-discrepancybased,
title = {{Discrepancy-Based Networks for Unsupervised Domain Adaptation: A Comparative Study}},
author = {Csurka, Gabriela and Baradel, Fabien and Chidlovskii, Boris and Clinchant, Stéphane},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2017},
pages = {2630--2636},
doi = {10.1109/ICCVW.2017.312},
url = {https://mlanthology.org/iccvw/2017/csurka2017iccvw-discrepancybased/}
}