Flexible Latent Variable Models for Multi-Task Learning
Abstract
Given multiple prediction problems such as regression or classification, we are interested in a joint inference framework that can effectively share information between tasks to improve prediction accuracy, especially when the number of training examples per problem is small. In this paper we propose a probabilistic framework that supports a set of latent variable models for different multi-task learning scenarios. We show that the framework generalizes standard learning methods for single prediction problems and can effectively model the shared structure among different prediction tasks. Furthermore, we present efficient algorithms for the empirical Bayes method as well as for point estimation. Our experiments on both simulated datasets and real-world classification datasets show the effectiveness of the proposed models in two evaluation settings: a standard multi-task learning setting and a transfer learning setting.
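To make the idea of sharing structure across tasks concrete, here is a minimal sketch (not the paper's exact model) of the point-estimation flavor of multi-task learning: several regression tasks whose weight vectors are assumed to lie in a shared low-dimensional latent basis, fit jointly by alternating least squares. All variable names and the specific alternating scheme are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: T related regression tasks whose weight vectors
# share a k-dimensional latent basis (each task has too few examples
# to be fit reliably on its own).
T, d, n, k = 6, 20, 15, 2          # tasks, features, examples per task, latent dim

# Ground truth: each task weight vector is a combination of k shared directions.
basis = rng.normal(size=(d, k))
coeffs = rng.normal(size=(k, T))
W_true = basis @ coeffs

X = [rng.normal(size=(n, d)) for _ in range(T)]
y = [X[t] @ W_true[:, t] + 0.1 * rng.normal(size=n) for t in range(T)]

# Point estimation by alternating ridge-regularized least squares:
# fit per-task latent coefficients given the shared basis, then refit the basis.
L = rng.normal(size=(d, k))
lam = 1e-2
for _ in range(50):
    # Step 1: per-task latent coefficients s_t, solving min_s ||X_t L s - y_t||^2.
    S = np.column_stack([
        np.linalg.solve((X[t] @ L).T @ (X[t] @ L) + lam * np.eye(k),
                        (X[t] @ L).T @ y[t])
        for t in range(T)
    ])
    # Step 2: shared basis L, solving the stacked least-squares problem over
    # vec(L) via the identity vec(X_t L s_t) = (s_t^T kron X_t) vec(L).
    A = sum(np.kron(np.outer(S[:, t], S[:, t]), X[t].T @ X[t]) for t in range(T))
    b = sum(np.kron(S[:, t], X[t].T @ y[t]) for t in range(T))
    L = np.linalg.solve(A + lam * np.eye(d * k), b).reshape(d, k, order="F")

W_hat = L @ S
err = np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true)
print(f"relative error of jointly estimated weights: {err:.3f}")
```

Each task alone is underdetermined (15 examples, 20 features), but because all tasks share the same latent basis, the joint fit recovers the weight vectors; this is the kind of information sharing across tasks that the abstract describes, here in its simplest linear form.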
Cite

Text

Zhang et al. "Flexible Latent Variable Models for Multi-Task Learning." Machine Learning, 2008. doi:10.1007/s10994-008-5050-1

Markdown

[Zhang et al. "Flexible Latent Variable Models for Multi-Task Learning." Machine Learning, 2008.](https://mlanthology.org/mlj/2008/zhang2008mlj-flexible/) doi:10.1007/s10994-008-5050-1

BibTeX
@article{zhang2008mlj-flexible,
title = {{Flexible Latent Variable Models for Multi-Task Learning}},
author = {Zhang, Jian and Ghahramani, Zoubin and Yang, Yiming},
journal = {Machine Learning},
year = {2008},
pages = {221--242},
doi = {10.1007/s10994-008-5050-1},
volume = {73},
url = {https://mlanthology.org/mlj/2008/zhang2008mlj-flexible/}
}