Supervised Domain Adaptation Based on Marginal and Conditional Distributions Alignment

Abstract

Supervised domain adaptation (SDA) is an area of machine learning in which the goal is to achieve good generalization performance on data from a target domain, given a small corpus of labeled training data from that target domain and a large corpus of labeled data from a related source domain. In this work, building on a generalization of a well-known theoretical result of Ben-David et al. (2010), we propose an SDA approach in which adaptation is performed by aligning the marginal and conditional components of the input-label joint distributions. In addition to this theoretical grounding, we demonstrate that the proposed approach has two advantages over existing SDA approaches. First, it applies to a broad range of learning tasks, such as regression, classification, multi-label classification, and few-shot learning. Second, it takes into account the geometric structure of the input and label spaces. Experimentally, despite its generality, our approach achieves results on par with or superior to those of recent state-of-the-art task-specific methods.
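
To make the alignment idea concrete, below is a minimal sketch of combining a marginal term (a discrepancy between source and target feature distributions) with a conditional term (a discrepancy between per-class feature distributions). This is not the authors' implementation: the discrepancy measure (an RBF-kernel MMD here), the `lam` weighting, and all function names are illustrative assumptions; the paper's actual objective, geometry-aware terms, and weighting may differ.

```python
# Hypothetical sketch of marginal + conditional distribution alignment.
# MMD is used as a stand-in discrepancy; the paper's measure may differ.
import numpy as np

def rbf_mmd2(x, y, gamma=1.0):
    """Squared MMD between samples x and y under an RBF kernel."""
    def k(a, b):
        # Pairwise squared distances -> RBF kernel matrix.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels, lam=1.0):
    """Marginal + class-conditional alignment; lam is an assumed weight."""
    marginal = rbf_mmd2(src_feats, tgt_feats)
    conditional = 0.0
    classes = np.intersect1d(src_labels, tgt_labels)
    for c in classes:
        # Align the conditional distributions of features given label c.
        conditional += rbf_mmd2(src_feats[src_labels == c],
                                tgt_feats[tgt_labels == c])
    return marginal + lam * conditional / max(len(classes), 1)

# Toy usage: 2-D features, two classes, small labeled target set.
rng = np.random.default_rng(0)
xs, ys = rng.normal(0.0, 1.0, (100, 2)), rng.integers(0, 2, 100)
xt, yt = rng.normal(0.5, 1.0, (20, 2)), rng.integers(0, 2, 20)
print(alignment_loss(xs, ys, xt, yt))
```

In practice such a loss would be minimized over a shared feature extractor alongside the supervised task loss; the sketch only shows how the marginal and conditional components can be measured separately and combined.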

Cite

Text

Katz et al. "Supervised Domain Adaptation Based on Marginal and Conditional Distributions Alignment." Transactions on Machine Learning Research, 2024.

Markdown

[Katz et al. "Supervised Domain Adaptation Based on Marginal and Conditional Distributions Alignment." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/katz2024tmlr-supervised/)

BibTeX

@article{katz2024tmlr-supervised,
  title     = {{Supervised Domain Adaptation Based on Marginal and Conditional Distributions Alignment}},
  author    = {Katz, Ori and Talmon, Ronen and Shaham, Uri},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/katz2024tmlr-supervised/}
}