Multitask Learning via Shared Features: Algorithms and Hardness
Abstract
We investigate the computational efficiency of multitask learning of Boolean functions over the $d$-dimensional hypercube that are related by means of a feature representation of size $k \ll d$ shared across all tasks. We present a polynomial-time multitask learning algorithm for the concept class of halfspaces with margin $\gamma$, which is based on a simultaneous boosting technique and requires only $\mathrm{poly}(k/\gamma)$ samples per task and $\mathrm{poly}(k\log(d)/\gamma)$ samples in total. In addition, we prove a computational separation: assuming there exists a concept class that cannot be learned in the attribute-efficient model, we construct another concept class that can be learned in the attribute-efficient model but cannot be multitask learned efficiently; multitask learning this concept class requires either super-polynomial time or a much larger total number of samples.
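The shared-feature setup from the abstract can be made concrete with a small synthetic sketch. The Python snippet below is only an illustration of the problem model, not the paper's simultaneous boosting algorithm: the dimensions `d`, `k`, the coordinate-selection feature map, the perceptron learner, and all parameter values are assumptions chosen for the example, and the margin condition is not explicitly enforced.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): the shared-feature multitask
# model. Each task t labels x in {-1,+1}^d by a halfspace over a k-dimensional
# feature map shared by all tasks. Here the shared map simply selects k of the
# d coordinates; each task's weight vector acts only on those k features.

rng = np.random.default_rng(0)
d, k, num_tasks, n_per_task = 200, 5, 10, 300

# Shared representation: a hidden subset of k relevant coordinates.
shared_coords = rng.choice(d, size=k, replace=False)

def sample_task(w):
    """Draw labeled examples for one task; labels depend only on the shared features."""
    X = rng.choice([-1.0, 1.0], size=(n_per_task, d))
    y = np.sign(X[:, shared_coords] @ w)
    y[y == 0] = 1.0
    return X, y

# Per-task halfspaces over the shared features (unit-norm weight vectors).
task_weights = [w / np.linalg.norm(w) for w in rng.normal(size=(num_tasks, k))]
tasks = [sample_task(w) for w in task_weights]

def perceptron(X, y, epochs=50):
    """Classic perceptron: once the k shared features are known, each task is a
    low-dimensional halfspace-learning problem."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
    return w

for t, (X, y) in enumerate(tasks):
    w_hat = perceptron(X[:, shared_coords], y)
    acc = np.mean(np.sign(X[:, shared_coords] @ w_hat) == y)
    print(f"task {t}: train accuracy {acc:.2f}")
```

The point of the sketch is the sample-complexity asymmetry the abstract refers to: each individual task only needs enough examples to learn a $k$-dimensional halfspace, while identifying the shared representation among all $d$ coordinates is what drives the $\log(d)$ dependence in the total sample count.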
Cite
Text
Bairaktari et al. "Multitask Learning via Shared Features: Algorithms and Hardness." Conference on Learning Theory, 2023.
Markdown
[Bairaktari et al. "Multitask Learning via Shared Features: Algorithms and Hardness." Conference on Learning Theory, 2023.](https://mlanthology.org/colt/2023/bairaktari2023colt-multitask/)
BibTeX
@inproceedings{bairaktari2023colt-multitask,
title = {{Multitask Learning via Shared Features: Algorithms and Hardness}},
author = {Bairaktari, Konstantina and Blanc, Guy and Tan, Li-Yang and Ullman, Jonathan and Zakynthinou, Lydia},
booktitle = {Conference on Learning Theory},
year = {2023},
pages = {747--772},
volume = {195},
url = {https://mlanthology.org/colt/2023/bairaktari2023colt-multitask/}
}