Wasserstein Transfer Learning

Abstract

Transfer learning is a powerful paradigm for leveraging knowledge from source domains to enhance learning in a target domain. However, traditional transfer learning approaches often focus on scalar or multivariate data within Euclidean spaces, limiting their applicability to complex data structures such as probability distributions. To address this, we introduce a novel framework for transfer learning in regression models, where outputs are probability distributions residing in the Wasserstein space. When the informative subset of transferable source domains is known, we propose an estimator with provable asymptotic convergence rates, quantifying the impact of domain similarity on transfer efficiency. For cases where the informative subset is unknown, we develop a data-driven transfer learning procedure designed to mitigate negative transfer. The proposed methods are supported by rigorous theoretical analysis and are validated through extensive simulations and real-world applications.
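The abstract's setting involves regression outputs that are probability distributions compared under the Wasserstein metric. The paper's estimators are not reproduced here, but as a minimal illustration of the underlying geometry: for one-dimensional distributions, the 2-Wasserstein distance reduces to the L2 distance between quantile functions, which for equal-size samples is just the root-mean-square gap between sorted values. The function name below is a hypothetical helper for this sketch, not from the paper.

```python
import numpy as np

def wasserstein2_1d(x, y):
    # In 1D, W_2(mu, nu) is the L2 norm of the difference of
    # quantile functions. With two samples of equal size, the
    # empirical quantile functions are step functions through the
    # sorted values, so W_2 is the RMS gap between sorted samples.
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    if x.shape != y.shape:
        raise ValueError("this sketch assumes equal-size samples")
    return float(np.sqrt(np.mean((x - y) ** 2)))

# A location shift moves every quantile by the same amount,
# so W_2 between a sample and its shifted copy equals the shift.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1000)
print(wasserstein2_1d(x, x + 1.0))
```

This quantile-function view is also what makes the 1D Wasserstein space tractable for regression: distributions can be represented by their quantile functions, where the metric becomes an ordinary L2 distance.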

Cite

Text

Zhang et al. "Wasserstein Transfer Learning." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zhang et al. "Wasserstein Transfer Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhang2025neurips-wasserstein/)

BibTeX

@inproceedings{zhang2025neurips-wasserstein,
  title     = {{Wasserstein Transfer Learning}},
  author    = {Zhang, Kaicheng and Zhang, Sinian and Zhou, Doudou and Zhou, Yidong},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhang2025neurips-wasserstein/}
}