Unsupervised Domain Adaptation via Minimized Joint Error

Abstract

Unsupervised domain adaptation transfers knowledge from a fully labeled source domain to a different target domain, where no labeled data are available. Some researchers have proposed upper bounds on the target error when transferring knowledge. For example, Ben-David et al. (2010) established a theory based on simultaneously minimizing the source error and the distance between marginal distributions. However, most research ignores the joint error because of its intractability. In this work, we argue that the joint error is essential for domain adaptation, particularly when the domain gap is large. To address this problem, we propose a novel objective related to an upper bound on the joint error. Moreover, we adopt a source/pseudo-target label-induced hypothesis space that reduces the search space to further tighten this bound. To measure the dissimilarity between hypotheses, we define a novel cross-margin discrepancy that alleviates instability during adversarial learning. In addition, we present extensive empirical evidence showing that the proposed method improves image classification accuracy on standard domain adaptation benchmarks.
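For context, the Ben-David et al. (2010) result the abstract refers to is the classical target-error bound, in which the joint-error term is exactly the quantity this paper argues should not be ignored. A standard statement (with the usual notation, not taken from this paper) is:

```latex
% Classical generalization bound of Ben-David et al. (2010).
% For every hypothesis h in a hypothesis class H:
%   eps_S, eps_T : source and target error; D_S, D_T : marginal distributions.
\epsilon_T(h) \;\le\; \epsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  \;+\; \lambda,
\qquad
\lambda \;=\; \min_{h' \in \mathcal{H}} \big[ \epsilon_S(h') + \epsilon_T(h') \big].
```

Here λ is the joint error of the ideal joint hypothesis; most prior methods treat it as a small constant, whereas this paper targets it directly.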

Cite

Text

Zhang et al. "Unsupervised Domain Adaptation via Minimized Joint Error." Transactions on Machine Learning Research, 2023.

Markdown

[Zhang et al. "Unsupervised Domain Adaptation via Minimized Joint Error." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/zhang2023tmlr-unsupervised/)

BibTeX

@article{zhang2023tmlr-unsupervised,
  title     = {{Unsupervised Domain Adaptation via Minimized Joint Error}},
  author    = {Zhang, Dexuan and Westfechtel, Thomas and Harada, Tatsuya},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/zhang2023tmlr-unsupervised/}
}