Fredholm Multiple Kernel Learning for Semi-Supervised Domain Adaptation
Abstract
As a fundamental constituent of machine learning, domain adaptation generalizes a learning model from a source domain to a different (but related) target domain. In this paper, we focus on semi-supervised domain adaptation and explicitly extend the use of unlabeled target samples to both distribution alignment and adaptive classifier learning. Specifically, our extension formulates the following aspects in a single optimization: 1) learning a cross-domain predictive model by developing a Fredholm integral based kernel prediction framework; 2) reducing the distribution difference between the two domains; 3) exploring multiple kernels to induce an optimal learning space. Accordingly, the extension is distinguished by its noise resiliency, its facilitation of knowledge transfer, and its ability to analyze diverse data characteristics. We further prove the differentiability of our formulation and present an effective optimization procedure based on the reduced gradient, guaranteeing rapid convergence. Comprehensive empirical studies verify the effectiveness of the proposed method.
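The two building blocks named in the abstract — a Fredholm kernel built from unlabeled samples and a convex combination of multiple kernels — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an RBF base kernel, and the names `fredholm_kernel`, `combined_kernel`, and the bandwidth parameters `gamma_out`/`gamma_in` are hypothetical choices for the sketch.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    # Gram matrix of an RBF base kernel: k(x, z) = exp(-gamma * ||x - z||^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fredholm_kernel(X, Z, U, gamma_out, gamma_in):
    # Fredholm-style kernel averaged over unlabeled samples U (here, drawn
    # from both domains): k_F(x, z) ~ (1/u^2) * sum_{i,j} k(x, u_i) k(u_i, u_j) k(u_j, z).
    # Smoothing through U is what yields the noise resiliency the abstract mentions.
    Kxu = rbf_kernel(X, U, gamma_out)
    Kuu = rbf_kernel(U, U, gamma_in)
    Kuz = rbf_kernel(U, Z, gamma_out)
    return Kxu @ Kuu @ Kuz / (len(U) ** 2)

def combined_kernel(kernels, d):
    # Multiple kernel learning combines base Gram matrices with convex
    # weights d on the simplex; d is what a reduced-gradient step would update.
    d = np.asarray(d, dtype=float) / np.sum(d)
    return sum(w * K for w, K in zip(d, kernels))
```

With `Z = X` and symmetric `U`, the Fredholm Gram matrix stays symmetric positive semi-definite, so it can be dropped into any kernel predictor alongside the other base kernels in the combination.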
Cite
Text
Wang et al. "Fredholm Multiple Kernel Learning for Semi-Supervised Domain Adaptation." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.10818
Markdown
[Wang et al. "Fredholm Multiple Kernel Learning for Semi-Supervised Domain Adaptation." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/wang2017aaai-fredholm/) doi:10.1609/AAAI.V31I1.10818
BibTeX
@inproceedings{wang2017aaai-fredholm,
title = {{Fredholm Multiple Kernel Learning for Semi-Supervised Domain Adaptation}},
author = {Wang, Wei and Wang, Hao and Zhang, Chen and Gao, Yang},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2017},
pages = {2732-2738},
doi = {10.1609/AAAI.V31I1.10818},
url = {https://mlanthology.org/aaai/2017/wang2017aaai-fredholm/}
}