A Difference Standardization Method for Mutual Transfer Learning
Abstract
In many real-world applications, mutual transfer learning is a paradigm in which each data domain can potentially serve as either a source or a target domain. This differs from standard transfer learning tasks, where the source and target are known a priori. However, previous studies of mutual transfer learning either suffer from high computational complexity or rest on oversimplified hypotheses. To overcome these challenges, in this paper we propose the Difference Standardization method (DiffS) for mutual transfer learning. Specifically, we put forward a novel distance metric between domains, the standardized domain difference, which enables fast structure recovery and accurate parameter estimation simultaneously. We validate the method's performance on both synthetic and real-world data. Compared to previous methods, DiffS runs approximately 3,000 times faster while achieving equally accurate estimation of the learnability structure.
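The abstract does not spell out the metric itself, but as a rough intuition, a "standardized domain difference" can be pictured as the gap between two domains' parameter estimates, rescaled by the uncertainty of those estimates. The following is a minimal, hypothetical sketch assuming a linear model per domain; the function names (`fit_ols`, `standardized_domain_difference`) and the pooled-variance scaling are illustrative assumptions, not the authors' DiffS algorithm.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares for one domain: returns the coefficient
    estimate and its (estimated) covariance matrix."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])  # residual variance
    return beta, sigma2 * XtX_inv

def standardized_domain_difference(Xa, ya, Xb, yb):
    """Difference of the two domains' coefficient estimates, standardized
    by the combined estimation uncertainty (domains assumed independent)."""
    beta_a, cov_a = fit_ols(Xa, ya)
    beta_b, cov_b = fit_ols(Xb, yb)
    diff = beta_a - beta_b
    scale = np.sqrt(np.diag(cov_a) + np.diag(cov_b))
    return np.linalg.norm(diff / scale)
```

In a mutual transfer setting, one could evaluate this quantity for every pair of domains and threshold it: pairs with a small standardized difference are treated as mutually transferable, which is the kind of learnability structure the abstract refers to recovering.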
Cite
Text
Xu et al. "A Difference Standardization Method for Mutual Transfer Learning." International Conference on Machine Learning, 2022.

Markdown
[Xu et al. "A Difference Standardization Method for Mutual Transfer Learning." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/xu2022icml-difference/)

BibTeX
@inproceedings{xu2022icml-difference,
title = {{A Difference Standardization Method for Mutual Transfer Learning}},
author = {Xu, Haoqing and Wang, Meng and Wang, Beilun},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {24683--24697},
volume = {162},
url = {https://mlanthology.org/icml/2022/xu2022icml-difference/}
}