Multi-Task and Lifelong Learning of Kernels

Abstract

We consider the problem of learning kernels for use in SVM classification in the multi-task and lifelong scenarios, and provide generalization bounds on the error of a large-margin classifier. Our results show that, under mild conditions on the family of kernels used for learning, solving several related tasks simultaneously is beneficial over single-task learning. In particular, assuming the considered family of kernels contains one that yields low approximation error on all tasks, the overhead associated with learning such a kernel vanishes as the number of observed tasks grows, and the complexity converges to that of learning when this good kernel is given to the learner.
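The idea behind the abstract can be illustrated with a toy sketch (not the paper's algorithm or bounds): given several related classification tasks and a small family of candidate kernels, pick the single kernel that minimizes the average error across all observed tasks. Here kernel regularized least squares stands in for the large-margin SVM classifier, and the task generator, kernel family, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Z, gamma):
    # Gaussian (RBF) kernel matrix between the rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict_err(X, y, gamma, lam=0.1):
    # Kernel regularized least squares as a stand-in for an SVM:
    # train on the first half, report 0/1 error on the second half.
    n = len(X) // 2
    Xtr, ytr, Xte, yte = X[:n], y[:n], X[n:], y[n:]
    K = rbf(Xtr, Xtr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(n), 2.0 * ytr - 1.0)
    pred = rbf(Xte, Xtr, gamma) @ alpha
    return np.mean((pred > 0) != (yte == 1))

def make_task(n=200):
    # Related binary tasks: class means shifted along a random unit direction.
    d = rng.normal(size=2)
    d /= np.linalg.norm(d)
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 2)) + np.outer(2 * y - 1, d)
    return X, y

tasks = [make_task() for _ in range(5)]
gammas = [0.01, 0.1, 1.0, 10.0]  # hypothetical candidate kernel family

# Multi-task kernel choice: one kernel minimizing average error over tasks.
avg = {g: np.mean([fit_predict_err(X, y, g) for X, y in tasks]) for g in gammas}
shared = min(avg, key=avg.get)
print(f"shared gamma = {shared}, avg test error = {avg[shared]:.3f}")
```

With more observed tasks, the selection of the shared kernel becomes more reliable, which is the intuition behind the vanishing-overhead result stated in the abstract.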

Cite

Text

Pentina and Ben-David. "Multi-Task and Lifelong Learning of Kernels." International Conference on Algorithmic Learning Theory, 2015. doi:10.1007/978-3-319-24486-0_13

Markdown

[Pentina and Ben-David. "Multi-Task and Lifelong Learning of Kernels." International Conference on Algorithmic Learning Theory, 2015.](https://mlanthology.org/alt/2015/pentina2015alt-multitask/) doi:10.1007/978-3-319-24486-0_13

BibTeX

@inproceedings{pentina2015alt-multitask,
  title     = {{Multi-Task and Lifelong Learning of Kernels}},
  author    = {Pentina, Anastasia and Ben-David, Shai},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2015},
  pages     = {194--208},
  doi       = {10.1007/978-3-319-24486-0_13},
  url       = {https://mlanthology.org/alt/2015/pentina2015alt-multitask/}
}