A Unified Perspective on Multi-Domain and Multi-Task Learning
Abstract
In this paper, we provide a new neural-network based perspective on multi-task learning (MTL) and multi-domain learning (MDL). By introducing the concept of a semantic descriptor, this framework unifies MDL and MTL, and encompasses various classic and recent MTL/MDL algorithms by interpreting them as different ways of constructing semantic descriptors. Our interpretation provides an alternative pipeline for zero-shot learning (ZSL), where a model for a novel class can be constructed without training data. Moreover, it leads to a new and practically relevant problem setting of zero-shot domain adaptation (ZSDA), which is analogous to ZSL but for novel domains: a model for an unseen domain can be generated from its semantic descriptor. Experiments across this range of problems demonstrate that our framework outperforms a variety of alternatives.
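To make the semantic-descriptor idea concrete, the following is a minimal illustrative sketch (not the authors' exact architecture): a task's or domain's linear model is generated by combining a shared weight matrix with that task/domain's descriptor, so a descriptor for an unseen domain (e.g., a novel combination of binary attributes) yields a model without any training data for it. All names and dimensions here are hypothetical.

```python
import numpy as np

# Hypothetical dimensions for illustration only.
d_feat = 10   # input feature dimension
d_desc = 3    # semantic descriptor dimension

rng = np.random.default_rng(0)
# Shared parameters assumed to be learned jointly across tasks/domains
# (random here purely for illustration).
W = rng.normal(size=(d_desc, d_feat))

def model_weights(descriptor):
    """Generate a linear model's weight vector from a semantic descriptor."""
    return descriptor @ W

# A one-hot descriptor indexes an individual seen domain's model;
# a new descriptor (unseen combination of attributes) generates a
# model for a domain with no training data.
seen_descriptor = np.array([1.0, 0.0, 0.0])
unseen_descriptor = np.array([1.0, 1.0, 0.0])

x = rng.normal(size=d_feat)  # a single input example
for name, z in [("seen", seen_descriptor), ("unseen", unseen_descriptor)]:
    w = model_weights(z)
    print(name, "prediction:", w @ x)
```

In this reading, choosing the descriptor (one-hot per task, shared components across domains, attribute vectors, etc.) corresponds to choosing among different MTL/MDL algorithms, which is the unification the abstract describes.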
Cite
Text
Yang and Hospedales. "A Unified Perspective on Multi-Domain and Multi-Task Learning." International Conference on Learning Representations, 2015.
Markdown
[Yang and Hospedales. "A Unified Perspective on Multi-Domain and Multi-Task Learning." International Conference on Learning Representations, 2015.](https://mlanthology.org/iclr/2015/yang2015iclr-unified/)
BibTeX
@inproceedings{yang2015iclr-unified,
title = {{A Unified Perspective on Multi-Domain and Multi-Task Learning}},
author = {Yang, Yongxin and Hospedales, Timothy M.},
booktitle = {International Conference on Learning Representations},
year = {2015},
url = {https://mlanthology.org/iclr/2015/yang2015iclr-unified/}
}