Learning with Whom to Share in Multi-Task Feature Learning

Abstract

In multi-task learning (MTL), multiple tasks are learnt jointly. A major assumption of this paradigm is that all the tasks are indeed related, so that joint training is appropriate and beneficial. In this paper, we study the problem of multi-task learning of shared feature representations among tasks, while simultaneously determining "with whom" each task should share. We formulate the problem as a mixed integer program and provide an alternating minimization technique to solve the optimization problem of jointly identifying grouping structures and parameters. The algorithm monotonically decreases the objective function and converges to a local optimum. Compared to the standard MTL paradigm where all tasks are in a single group, our algorithm improves performance with statistical significance on three of the four datasets we have studied. We also demonstrate its advantage over other task grouping techniques investigated in the literature.
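The alternating idea described above can be illustrated with a minimal sketch. Note that this is not the paper's method: it substitutes plain per-group ridge regression for the paper's shared-feature-learning objective, and all function names and the toy data are invented for illustration. It only shows the alternation pattern: fix group assignments and fit each group's shared parameters, then fix the parameters and reassign each task to its best-fitting group. Since neither step can increase the total loss, the objective decreases monotonically to a local optimum.

```python
import numpy as np

def ridge(X, y, lam=1e-2):
    """Closed-form ridge regression: (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def task_loss(w, X, y):
    """Mean squared error of weights w on one task's data."""
    r = X @ w - y
    return float(r @ r) / len(y)

def alternating_grouping(tasks, n_groups=2, n_iter=10, lam=1e-2):
    # Seed each group's shared weights from one task, then alternate:
    # (1) reassign each task to the group whose weights fit it best;
    # (2) refit each group's weights on the pooled data of its members.
    # Neither step increases the total loss, so the procedure converges
    # to a local optimum (a simplified analogue of the paper's scheme).
    ws = [ridge(X, y, lam) for X, y in tasks[:n_groups]]
    assign = np.zeros(len(tasks), dtype=int)
    for _ in range(n_iter):
        # Step 1: assignment update.
        assign = np.array([
            int(np.argmin([task_loss(w, X, y) for w in ws]))
            for X, y in tasks
        ])
        # Step 2: parameter update per group (pooled least squares).
        for g in range(n_groups):
            members = [t for t, a in zip(tasks, assign) if a == g]
            if members:  # keep previous weights if a group goes empty
                X = np.vstack([Xt for Xt, _ in members])
                y = np.concatenate([yt for _, yt in members])
                ws[g] = ridge(X, y, lam)
    return assign, ws

# Toy data: four tasks drawn from two underlying models, interleaved.
rng = np.random.default_rng(0)
w_a, w_b = np.array([2.0, 0.0]), np.array([0.0, -3.0])
tasks = []
for w_true in (w_a, w_b, w_a, w_b):
    X = rng.normal(size=(50, 2))
    tasks.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

assign, _ = alternating_grouping(tasks)
print(assign.tolist())  # tasks 0 and 2 share one group; 1 and 3 the other
```

In this toy run, the tasks generated from the same underlying model end up in the same group, which is the behavior the paper's grouping objective is designed to recover (the paper itself handles the harder joint problem of learning shared feature representations within each group).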

Cite

Text

Kang et al. "Learning with Whom to Share in Multi-Task Feature Learning." International Conference on Machine Learning, 2011.

Markdown

[Kang et al. "Learning with Whom to Share in Multi-Task Feature Learning." International Conference on Machine Learning, 2011.](https://mlanthology.org/icml/2011/kang2011icml-learning/)

BibTeX

@inproceedings{kang2011icml-learning,
  title     = {{Learning with Whom to Share in Multi-Task Feature Learning}},
  author    = {Kang, Zhuoliang and Grauman, Kristen and Sha, Fei},
  booktitle = {International Conference on Machine Learning},
  year      = {2011},
  pages     = {521--528},
  url       = {https://mlanthology.org/icml/2011/kang2011icml-learning/}
}