Task Clustering and Gating for Bayesian Multitask Learning
Abstract
Modeling a collection of similar regression or classification tasks can be improved by making the tasks 'learn from each other'. In machine learning, this subject is approached through 'multitask learning', where parallel tasks are modeled as multiple outputs of the same network. In multilevel analysis this is generally implemented through the mixed-effects linear model where a distinction is made between 'fixed effects', which are the same for all tasks, and 'random effects', which may vary between tasks. In the present article we will adopt a Bayesian approach in which some of the model parameters are shared (the same for all tasks) and others more loosely connected through a joint prior distribution that can be learned from the data. We seek in this way to combine the best parts of both the statistical multilevel approach and the neural network machinery.
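The hierarchical idea in the abstract — task-specific parameters tied together by a joint prior that is itself learned from the data — can be illustrated with a minimal empirical-Bayes sketch. This is not the paper's actual algorithm (which uses neural networks, task clustering, and gating); it is a simplified linear-regression analogue under assumed synthetic data, a fixed noise variance, and an isotropic Gaussian prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic setup: 5 related regression tasks whose true
# weight vectors are drawn around a common mean (the "similar tasks").
n_tasks, n_samples, dim = 5, 30, 3
true_mean = np.array([1.0, -2.0, 0.5])
true_w = true_mean + 0.3 * rng.standard_normal((n_tasks, dim))
X = rng.standard_normal((n_tasks, n_samples, dim))
y = np.einsum('tnd,td->tn', X, true_w) + 0.1 * rng.standard_normal((n_tasks, n_samples))

# EM-style alternation: MAP-estimate each task's weights given the
# shared Gaussian prior, then re-estimate the prior from those weights.
m = np.zeros(dim)     # prior mean, shared across tasks
tau2 = 1.0            # prior variance (isotropic, for simplicity)
sigma2 = 0.01         # noise variance, assumed known here
for _ in range(20):
    W = np.empty((n_tasks, dim))
    for t in range(n_tasks):
        # Ridge-like MAP solution: (X'X/s2 + I/t2) w = X'y/s2 + m/t2
        A = X[t].T @ X[t] / sigma2 + np.eye(dim) / tau2
        b = X[t].T @ y[t] / sigma2 + m / tau2
        W[t] = np.linalg.solve(A, b)
    m = W.mean(axis=0)                        # learned prior mean
    tau2 = max(W.var(axis=0).mean(), 1e-6)    # learned prior variance

print(np.round(m, 2))  # close to the shared structure in true_mean
```

Because the prior mean and variance are re-estimated from the task-level fits, each task effectively "learns from the others": tasks with little data are shrunk toward the pooled estimate, which is the statistical core shared by the mixed-effects and Bayesian multitask views.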
Cite
Bakker and Heskes. "Task Clustering and Gating for Bayesian Multitask Learning." Journal of Machine Learning Research, 2003. https://mlanthology.org/jmlr/2003/bakker2003jmlr-task/
@article{bakker2003jmlr-task,
title = {{Task Clustering and Gating for Bayesian Multitask Learning}},
author = {Bakker, Bart and Heskes, Tom},
journal = {Journal of Machine Learning Research},
year = {2003},
pages = {83-99},
volume = {4},
url = {https://mlanthology.org/jmlr/2003/bakker2003jmlr-task/}
}