Bounds for Linear Multi-Task Learning
Abstract
We give dimension-free and data-dependent bounds for linear multi-task learning, where a common linear operator is chosen to preprocess data for a vector of task-specific linear-thresholding classifiers. The complexity penalty of multi-task learning is bounded by a simple expression involving the margins of the task-specific classifiers, the Hilbert-Schmidt norm of the selected preprocessor, and the Hilbert-Schmidt norm of the covariance operator for the total mixture of all task distributions; in the data-dependent version, the latter is replaced by the Frobenius norm of the total Gramian matrix. The results can be compared to state-of-the-art results on linear single-task learning.
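The data-dependent quantity named in the abstract, the Frobenius norm of the total Gramian matrix of the pooled sample, is directly computable from data. A minimal sketch (the task count, sample sizes, and variable names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: pool the inputs of 3 tasks, 20 examples each,
# into one sample whose rows are examples (dimension 5).
X = np.vstack([rng.normal(size=(20, 5)) for _ in range(3)])

# Total Gramian matrix of the pooled sample and its Frobenius
# (Hilbert-Schmidt) norm, the quantity entering the bound.
G = X @ X.T
frobenius = np.linalg.norm(G, "fro")
```

Because the Gramian only involves inner products, the same computation extends to kernel-induced feature spaces by replacing `X @ X.T` with a kernel matrix.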
Cite
Text
Maurer. "Bounds for Linear Multi-Task Learning." Journal of Machine Learning Research, 2006.

Markdown

[Maurer. "Bounds for Linear Multi-Task Learning." Journal of Machine Learning Research, 2006.](https://mlanthology.org/jmlr/2006/maurer2006jmlr-bounds/)

BibTeX
@article{maurer2006jmlr-bounds,
title = {{Bounds for Linear Multi-Task Learning}},
author = {Maurer, Andreas},
journal = {Journal of Machine Learning Research},
year = {2006},
pages = {117-139},
volume = {7},
url = {https://mlanthology.org/jmlr/2006/maurer2006jmlr-bounds/}
}