Excess Risk Bounds for Multitask Learning with Trace Norm Regularization
Abstract
Trace norm regularization is a popular method for multitask learning. We give excess risk bounds with explicit dependence on the number of tasks, the number of examples per task, and properties of the data distribution. The bounds are independent of the dimension of the input space, which may be infinite, as in the case of reproducing kernel Hilbert spaces. A byproduct of the proof is a set of bounds on the expected norm of sums of random positive semidefinite matrices with subexponential moments.
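As context for the abstract, trace norm (nuclear norm) regularization penalizes the sum of singular values of the stacked task-weight matrix, encouraging the tasks to share a low-dimensional representation. A minimal NumPy sketch (not from the paper; the function names are illustrative) of the penalty and its proximal operator, which soft-thresholds the singular values:

```python
import numpy as np

def trace_norm(W):
    # Trace (nuclear) norm of the d x T task-weight matrix:
    # the sum of its singular values.
    return np.linalg.svd(W, compute_uv=False).sum()

def prox_trace_norm(W, lam):
    # Proximal operator of lam * ||.||_tr: shrink each singular
    # value toward zero by lam, clipping at zero. This is the
    # basic step of proximal-gradient solvers for trace norm
    # regularized multitask objectives.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.maximum(s - lam, 0.0)
    return U @ np.diag(s) @ Vt
```

For example, soft-thresholding `np.diag([3.0, 1.0])` with `lam = 0.5` yields `np.diag([2.5, 0.5])`, since the singular values of a nonnegative diagonal matrix are its diagonal entries.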
Cite
Text
Pontil and Maurer. "Excess Risk Bounds for Multitask Learning with Trace Norm Regularization." Annual Conference on Computational Learning Theory, 2013.

Markdown

[Pontil and Maurer. "Excess Risk Bounds for Multitask Learning with Trace Norm Regularization." Annual Conference on Computational Learning Theory, 2013.](https://mlanthology.org/colt/2013/pontil2013colt-excess/)

BibTeX
@inproceedings{pontil2013colt-excess,
title = {{Excess Risk Bounds for Multitask Learning with Trace Norm Regularization}},
author = {Pontil, Massimiliano and Maurer, Andreas},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2013},
pages = {55--76},
url = {https://mlanthology.org/colt/2013/pontil2013colt-excess/}
}