Theoretical Guarantees for Domain Adaptation with Hierarchical Optimal Transport

Abstract

Domain adaptation is an important problem in statistical learning theory that arises when the data-generating processes differ between the training and test samples, respectively called the source and target domains. Recent theoretical advances have demonstrated that the success of domain adaptation algorithms heavily relies on their ability to minimize the divergence between the probability distributions of the source and target domains. However, minimizing this divergence cannot be achieved independently of other key ingredients, such as the source risk or the combined error of the ideal joint hypothesis. The trade-off between these terms is often ensured through algorithmic solutions that remain implicit and are not directly reflected in the theoretical guarantees. To address this issue, we propose in this paper a new theoretical framework for domain adaptation based on hierarchical optimal transport. This framework provides more explicit generalization bounds and allows us to take into account the natural hierarchical organization of samples in both domains into structures, i.e., classes or clusters. Additionally, we introduce a new divergence measure between the source and target domains, called the Hierarchical Wasserstein distance, which indicates, under mild assumptions, which structures need to be aligned to achieve successful adaptation.
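
As a rough illustration of the two-level idea described above (a sketch only, not the authors' exact construction or bounds), the snippet below, assuming the Python POT library, computes an "OT of OT" distance: inner Wasserstein costs between individual structures (source classes and target clusters), then an outer optimal transport problem over the structures themselves, whose coupling suggests which structures to align.

```python
# Minimal sketch of a two-level (hierarchical) Wasserstein distance between domains,
# assuming the POT library (pip install pot). Structures are given as lists of arrays.
import numpy as np
import ot


def hierarchical_wasserstein(source_groups, target_groups):
    """source_groups / target_groups: lists of (n_i, d) arrays, one per structure."""
    K, L = len(source_groups), len(target_groups)

    # Inner level: exact OT cost between every pair of structures,
    # using uniform empirical weights and squared Euclidean ground costs.
    C = np.zeros((K, L))
    for i, Xs in enumerate(source_groups):
        for j, Xt in enumerate(target_groups):
            M = ot.dist(Xs, Xt)                  # pairwise squared Euclidean costs
            a = np.full(len(Xs), 1.0 / len(Xs))  # uniform weights on source samples
            b = np.full(len(Xt), 1.0 / len(Xt))  # uniform weights on target samples
            C[i, j] = ot.emd2(a, b, M)           # inner OT cost between the two structures

    # Outer level: OT between structures, with masses proportional to structure sizes
    # and the inner costs C as the ground metric between structures.
    ws = np.array([len(X) for X in source_groups], dtype=float)
    wt = np.array([len(X) for X in target_groups], dtype=float)
    ws, wt = ws / ws.sum(), wt / wt.sum()
    plan = ot.emd(ws, wt, C)                     # outer coupling between structures
    return float(np.sum(plan * C)), plan
```

In this sketch, the outer coupling `plan` plays the role alluded to in the abstract: its non-zero entries indicate which source structures would be matched with which target structures under the hierarchical transport.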

Cite

Text

El Hamri et al. "Theoretical Guarantees for Domain Adaptation with Hierarchical Optimal Transport." Machine Learning, 2025. doi:10.1007/s10994-025-06749-6

Markdown

[El Hamri et al. "Theoretical Guarantees for Domain Adaptation with Hierarchical Optimal Transport." Machine Learning, 2025.](https://mlanthology.org/mlj/2025/hamri2025mlj-theoretical/) doi:10.1007/s10994-025-06749-6

BibTeX

@article{hamri2025mlj-theoretical,
  title     = {{Theoretical Guarantees for Domain Adaptation with Hierarchical Optimal Transport}},
  author    = {El Hamri, Mourad and Bennani, Younès and Falih, Issam},
  journal   = {Machine Learning},
  year      = {2025},
  pages     = {119},
  doi       = {10.1007/s10994-025-06749-6},
  volume    = {114},
  url       = {https://mlanthology.org/mlj/2025/hamri2025mlj-theoretical/}
}