Multi-Task Feature Learning
Abstract
We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the well-known 1-norm regularization problem using a new regularizer which controls the number of learned features common for all the tasks. We show that this problem is equivalent to a convex optimization problem and develop an iterative algorithm for solving it. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the latter step we learn common-across-tasks representations and in the former step we learn task-specific functions using these representations. We report experiments on a simulated and a real data set which demonstrate that the proposed method dramatically improves the performance relative to learning each task independently. Our algorithm can also be used, as a special case, to simply select – not learn – a few common features across the tasks.
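The alternating scheme described in the abstract can be sketched in NumPy. This is a hypothetical illustration, not the authors' code: it assumes a squared loss, the trace-constrained objective Σ_t (loss_t + γ · w_tᵀ D⁻¹ w_t) with D ⪰ 0 and tr(D) ≤ 1, and the closed-form update D = (WWᵀ)^½ / tr((WWᵀ)^½); the function names and the small `eps` ridge are ours.

```python
import numpy as np

def matrix_sqrt(M):
    # Symmetric PSD square root via eigendecomposition.
    vals, vecs = np.linalg.eigh(M)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def multitask_feature_learning(Xs, ys, gamma=1.0, n_iters=50, eps=1e-6):
    """Alternating minimization sketch (hypothetical, squared loss).

    Xs, ys: per-task design matrices (n_t x d) and targets (n_t,).
    Returns task weight matrix W (d x T) and shared PSD matrix D.
    """
    d = Xs[0].shape[1]
    T = len(Xs)
    D = np.eye(d) / d            # start from the uniform shared matrix
    W = np.zeros((d, T))
    for _ in range(n_iters):
        # Supervised step: per-task ridge-like regression under metric D.
        D_inv = np.linalg.inv(D + eps * np.eye(d))
        for t in range(T):
            X, y = Xs[t], ys[t]
            W[:, t] = np.linalg.solve(X.T @ X + gamma * D_inv, X.T @ y)
        # Unsupervised step: closed-form update of the shared matrix D.
        S = matrix_sqrt(W @ W.T + eps * np.eye(d))
        D = S / np.trace(S)      # enforce trace(D) = 1
    return W, D
```

When the tasks truly share a low-dimensional feature subspace, the trace normalization concentrates D's spectrum on a few directions, which is the feature-selection effect the abstract mentions as a special case.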
Cite
Text
Argyriou et al. "Multi-Task Feature Learning." Neural Information Processing Systems, 2006.

Markdown

[Argyriou et al. "Multi-Task Feature Learning." Neural Information Processing Systems, 2006.](https://mlanthology.org/neurips/2006/argyriou2006neurips-multitask/)

BibTeX
@inproceedings{argyriou2006neurips-multitask,
title = {{Multi-Task Feature Learning}},
author = {Argyriou, Andreas and Evgeniou, Theodoros and Pontil, Massimiliano},
booktitle = {Neural Information Processing Systems},
year = {2006},
pages = {41--48},
url = {https://mlanthology.org/neurips/2006/argyriou2006neurips-multitask/}
}