Distributed Multi-Task Learning
Abstract
We consider the problem of distributed multi-task learning, where each machine learns a separate, but related, task. Specifically, each machine learns a linear predictor in high-dimensional space, where all tasks share the same small support. We present a communication-efficient estimator based on the debiased lasso and show that it is comparable with the optimal centralized method.
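The abstract's recipe — each machine fits a debiased lasso locally, communicates one vector, and the center aggregates to recover the shared support — can be sketched as below. This is a minimal illustration, not the paper's exact estimator: the regularization level, the 0.5 aggregation threshold, and the use of the inverse sample covariance for debiasing (valid only when p < n; the high-dimensional case needs a nodewise-lasso precision estimate) are all simplifying assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # residual with coord j removed
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

def debiased_lasso(X, y, lam):
    """One-step debiasing: b + (1/n) * Theta * X^T (y - Xb)."""
    n = X.shape[0]
    b = lasso_cd(X, y, lam)
    theta = np.linalg.inv(X.T @ X / n)   # assumption: p < n so this inverse exists
    return b + theta @ X.T @ (y - X @ b) / n

rng = np.random.default_rng(0)
p, n, m, s = 20, 200, 5, 3               # dimension, samples/machine, machines, support size
support = np.arange(s)                   # the support shared by every task

debiased = []
for _ in range(m):                       # each machine learns its own task
    beta = np.zeros(p)
    beta[support] = rng.uniform(1.0, 2.0, s)   # task-specific coefficients, shared support
    X = rng.standard_normal((n, p))
    y = X @ beta + 0.1 * rng.standard_normal(n)
    debiased.append(debiased_lasso(X, y, lam=0.1))  # one p-vector sent to the center

avg = np.mean(np.abs(debiased), axis=0)  # center aggregates the debiased estimates
est_support = np.where(avg > 0.5)[0]     # threshold to recover the shared support
print(est_support)
```

Communication here is a single p-dimensional vector per machine, independent of the per-machine sample size n — the sense in which such schemes are communication-efficient.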
Cite
Wang et al. "Distributed Multi-Task Learning." International Conference on Artificial Intelligence and Statistics, 2016.
BibTeX
@inproceedings{wang2016aistats-distributed,
title = {{Distributed Multi-Task Learning}},
author = {Wang, Jialei and Kolar, Mladen and Srebro, Nathan},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2016},
pages = {751--760},
url = {https://mlanthology.org/aistats/2016/wang2016aistats-distributed/}
}