Completely Heterogeneous Transfer Learning with Attention - What and What Not to Transfer
Abstract
We study a transfer learning framework where source and target datasets are heterogeneous in both feature and label spaces. Specifically, we do not assume explicit relations between source and target tasks a priori, and thus it is crucial to determine what and what not to transfer from source knowledge. Towards this goal, we define a new heterogeneous transfer learning approach that (1) selects and attends to an optimized subset of source samples to transfer knowledge from, and (2) builds a unified transfer network that learns from both source and target knowledge. This method, termed "Attentional Heterogeneous Transfer", along with a newly proposed unsupervised transfer loss, improves upon the previous state-of-the-art approaches on extensive simulations as well as a challenging hetero-lingual text classification task.
Cite
Text
Moon and Carbonell. "Completely Heterogeneous Transfer Learning with Attention - What and What Not to Transfer." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/349
Markdown
[Moon and Carbonell. "Completely Heterogeneous Transfer Learning with Attention - What and What Not to Transfer." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/moon2017ijcai-completely/) doi:10.24963/IJCAI.2017/349
BibTeX
@inproceedings{moon2017ijcai-completely,
title = {{Completely Heterogeneous Transfer Learning with Attention - What and What Not to Transfer}},
author = {Moon, Seungwhan and Carbonell, Jaime G.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2017},
pages = {2508-2514},
doi = {10.24963/IJCAI.2017/349},
url = {https://mlanthology.org/ijcai/2017/moon2017ijcai-completely/}
}