Kernel Multi-Task Learning Using Task-Specific Features

Abstract

In this paper we are concerned with multi-task learning when task-specific features are available. We describe two ways of achieving this using Gaussian process predictors: in the first method, the data from all tasks are combined into one dataset, making use of the task-specific features; in the second method, we train a specific predictor for each reference task and then combine their predictions using a gating network. We demonstrate these methods on a compiler performance prediction problem, where a task is defined as predicting the speed-up obtained when applying a sequence of code transformations to a given program.
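The first method described in the abstract, pooling all tasks into one dataset by pairing each input with its task-specific features, can be sketched with a toy kernel predictor. Everything below (the feature values, the squared-exponential kernel, the two synthetic "tasks") is illustrative and assumed, not the paper's actual setup:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel on joint [task-feature, input] vectors.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    # Standard GP regression posterior mean with observation noise.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)

# Two synthetic "tasks" distinguished by a 1-D task descriptor;
# the pooled dataset concatenates descriptor and input per example.
task_feat = np.array([0.0] * 5 + [1.0] * 5)
x = np.tile(np.linspace(0.0, 1.0, 5), 2)
X = np.column_stack([task_feat, x])
y = np.where(task_feat == 0.0, np.sin(2 * x), np.cos(2 * x))

# A single predictor now serves both tasks at the same input x = 0.5.
X_new = np.array([[0.0, 0.5], [1.0, 0.5]])
pred = gp_predict(X, y, X_new)
```

Because the task descriptor enters the kernel like any other feature, the single predictor shares statistical strength across tasks while still producing task-dependent outputs.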

Cite

Text

Bonilla et al. "Kernel Multi-Task Learning Using Task-Specific Features." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.

Markdown

[Bonilla et al. "Kernel Multi-Task Learning Using Task-Specific Features." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.](https://mlanthology.org/aistats/2007/bonilla2007aistats-kernel/)

BibTeX

@inproceedings{bonilla2007aistats-kernel,
  title     = {{Kernel Multi-Task Learning Using Task-Specific Features}},
  author    = {Bonilla, Edwin V. and Agakov, Felix V. and Williams, Christopher K. I.},
  booktitle = {Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics},
  year      = {2007},
  pages     = {43--50},
  volume    = {2},
  url       = {https://mlanthology.org/aistats/2007/bonilla2007aistats-kernel/}
}