Kernels for Multi-Task Learning

Abstract

This paper provides a foundation for multi-task learning using reproducing kernel Hilbert spaces of vector-valued functions. In this setting, the kernel is a matrix-valued function. Some explicit examples will be described which go beyond our earlier results in [7]. In particular, we characterize classes of matrix-valued kernels which are linear and are of the dot product or the translation invariant type. We discuss how these kernels can be used to model relations between the tasks and present linear multi-task learning algorithms. Finally, we present a novel proof of the representer theorem for a minimizer of a regularization functional which is based on the notion of minimal norm interpolation.
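As a concrete illustration of the matrix-valued setting, one widely used special case (a single instance of the kernel classes the paper characterizes, not its full generality) is the separable kernel K(x, x') = k(x, x') B, where k is a scalar kernel and B is an n x n positive semidefinite matrix encoding relations among the n tasks. By the representer theorem, a minimizer of the regularization functional takes the form f(x) = sum_j K(x, x_j) c_j with coefficient vectors c_j in R^n. The Python sketch below applies this to multi-task kernel ridge regression; the RBF kernel, the helper names (fit_multitask, predict), and the toy data are illustrative assumptions, not taken from the paper.

import numpy as np

def rbf(X, Z, gamma=1.0):
    # scalar Gaussian kernel matrix between the rows of X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_multitask(X, Y, B, lam=0.1, gamma=1.0):
    # regularized least squares with the separable Gram matrix kron(k(X, X), B);
    # returns one coefficient vector c_j in R^n per training example
    m, n = Y.shape
    G = np.kron(rbf(X, X, gamma), B)
    c = np.linalg.solve(G + lam * np.eye(m * n), Y.reshape(-1))
    return c.reshape(m, n)

def predict(Xnew, X, C, B, gamma=1.0):
    # representer form: f(x) = sum_j k(x, x_j) * (B @ c_j)
    return rbf(Xnew, X, gamma) @ C @ B.T

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = X.sum(axis=1)
Y = np.column_stack([y, y + 0.1 * rng.normal(size=30)])  # two closely related tasks
B = np.array([[1.0, 0.9], [0.9, 1.0]])                   # task-relation matrix
C = fit_multitask(X, Y, B)
print(predict(X[:3], X, C, B))                           # close to Y[:3]

Choosing B = I decouples the tasks into independent scalar regressions, while an off-diagonal coupling such as the 0.9 above lets the tasks share information, which is the effect the paper's kernels are designed to model.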

Cite

Text

Micchelli and Pontil. "Kernels for Multi-Task Learning." Neural Information Processing Systems, 2004.

Markdown

[Micchelli and Pontil. "Kernels for Multi-Task Learning." Neural Information Processing Systems, 2004.](https://mlanthology.org/neurips/2004/micchelli2004neurips-kernels/)

BibTeX

@inproceedings{micchelli2004neurips-kernels,
  title     = {{Kernels for Multi-Task Learning}},
  author    = {Micchelli, Charles A. and Pontil, Massimiliano},
  booktitle = {Neural Information Processing Systems},
  year      = {2004},
  pages     = {921--928},
  url       = {https://mlanthology.org/neurips/2004/micchelli2004neurips-kernels/}
}