Learning Task Grouping and Overlap in Multi-Task Learning

Abstract

In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature, and the overlap in the sparsity patterns of two tasks controls the amount of sharing between them. Our model is based on the assumption that task parameters within a group lie in a low-dimensional subspace, but it allows tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods.
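The core modeling assumption can be illustrated with a small NumPy sketch. This is a minimal illustration, not the paper's learning algorithm: the dimensions, the basis matrix `L`, and the sparse coefficient matrix `S` below are hypothetical values chosen to show how overlapping sparsity patterns induce selective sharing between task parameter vectors.

```python
import numpy as np

# Hypothetical sizes: d features, T tasks, k latent basis tasks (k < T).
d, T, k = 10, 4, 2
rng = np.random.default_rng(0)

# L: each column is one latent basis task's parameter vector.
L = rng.standard_normal((d, k))

# S: sparse coefficients; column t selects which bases task t uses.
# Tasks 0 and 1 overlap in basis 0; tasks 2 and 3 use only basis 1,
# so they share nothing with task 0.
S = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 0.7, 1.0, 0.3]])  # shape (k, T)

# Each task parameter vector is a sparse linear combination of the bases.
W = L @ S  # shape (d, T); column t holds the parameters of task t

# Overlap in sparsity patterns controls the amount of sharing:
support = (S != 0).astype(int)       # (k, T) indicator of active bases
overlap = support.T @ support        # (T, T) counts of shared bases
```

In the full model, `L` and `S` are learned jointly from the training data of all tasks, with a sparsity-inducing penalty on `S`; here they are fixed by hand only to make the grouping-with-overlap structure concrete.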

Cite

Text

Kumar and Daumé III. "Learning Task Grouping and Overlap in Multi-Task Learning." International Conference on Machine Learning, 2012.

Markdown

[Kumar and Daumé III. "Learning Task Grouping and Overlap in Multi-Task Learning." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/kumar2012icml-learning/)

BibTeX

@inproceedings{kumar2012icml-learning,
  title     = {{Learning Task Grouping and Overlap in Multi-Task Learning}},
  author    = {Kumar, Abhishek and Daumé III, Hal},
  booktitle = {International Conference on Machine Learning},
  year      = {2012},
  url       = {https://mlanthology.org/icml/2012/kumar2012icml-learning/}
}