Deep Multi-Task Representation Learning: A Tensor Factorisation Approach

Abstract

Most contemporary multi-task learning (MTL) methods assume linear models, a setting considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer of a deep network. Our approach generalises the matrix factorisation techniques used, explicitly or implicitly, by many conventional MTL algorithms to tensor factorisation, so that end-to-end knowledge sharing in deep networks is learned automatically. This is in contrast to existing deep learning approaches that require a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices.
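To illustrate the core idea, below is a minimal sketch of how per-task weight matrices at one layer can be stacked into a 3-way tensor and factorised (here Tucker-style, via truncated HOSVD) so that shared factors capture cross-task structure while task-specific combinations remain. This is an illustrative assumption-laden example using NumPy, not the authors' implementation; all dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_tasks, rank = 8, 6, 2, 3

# Hypothetical per-task weight matrices for one layer,
# stacked into a tensor of shape (d_in, d_out, n_tasks).
W = rng.standard_normal((d_in, d_out, n_tasks))

def mode_factors(T, ranks):
    """Leading left-singular vectors of each mode unfolding (truncated HOSVD)."""
    factors = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    return factors

# U1, U2 are shared across tasks; rows of U3 are task-specific mixing weights.
U1, U2, U3 = mode_factors(W, (rank, rank, n_tasks))

# Core tensor: project W onto the factor subspaces.
core = np.einsum('ijk,ia,jb,kc->abc', W, U1, U2, U3)

# Reconstruct: each task's weight matrix is a task-specific combination
# of the shared factors U1, U2 and the core.
W_hat = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print('relative reconstruction error:', rel_err)
```

In the paper's framework the factors are learned end-to-end by gradient descent rather than computed post hoc by SVD; the sketch only shows the sharing structure a tensor factorisation imposes.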

Cite

Text

Yang and Hospedales. "Deep Multi-Task Representation Learning: A Tensor Factorisation Approach." International Conference on Learning Representations, 2017.

Markdown

[Yang and Hospedales. "Deep Multi-Task Representation Learning: A Tensor Factorisation Approach." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/yang2017iclr-deep/)

BibTeX

@inproceedings{yang2017iclr-deep,
  title     = {{Deep Multi-Task Representation Learning: A Tensor Factorisation Approach}},
  author    = {Yang, Yongxin and Hospedales, Timothy M.},
  booktitle = {International Conference on Learning Representations},
  year      = {2017},
  url       = {https://mlanthology.org/iclr/2017/yang2017iclr-deep/}
}