Robust Domain Adaptation: Representations, Weights and Inductive Bias

Abstract

Domain Invariant Representations (IR) have drastically improved the transferability of representations from a labelled source domain to a new, unlabelled target domain. However, Unsupervised Domain Adaptation (UDA) in the presence of label shift remains an open problem. To this end, we present a bound on the target risk which incorporates both weights and invariant representations. Our theoretical analysis highlights the role of inductive bias in aligning distributions across domains. We illustrate this on standard benchmarks by proposing a new learning procedure for UDA. We observe empirically that a weak inductive bias makes adaptation robust to label shift. The elaboration of stronger inductive biases is a promising direction for new UDA algorithms.

Cite

Text

Bouvier et al. "Robust Domain Adaptation: Representations, Weights and Inductive Bias." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2020. doi:10.1007/978-3-030-67658-2_21

Markdown

[Bouvier et al. "Robust Domain Adaptation: Representations, Weights and Inductive Bias." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2020.](https://mlanthology.org/ecmlpkdd/2020/bouvier2020ecmlpkdd-robust/) doi:10.1007/978-3-030-67658-2_21

BibTeX

@inproceedings{bouvier2020ecmlpkdd-robust,
  title     = {{Robust Domain Adaptation: Representations, Weights and Inductive Bias}},
  author    = {Bouvier, Victor and Very, Philippe and Chastagnol, Clément and Tami, Myriam and Hudelot, Céline},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2020},
  pages     = {353--377},
  doi       = {10.1007/978-3-030-67658-2_21},
  url       = {https://mlanthology.org/ecmlpkdd/2020/bouvier2020ecmlpkdd-robust/}
}