Adaptive Smoothed Online Multi-Task Learning

Abstract

This paper addresses the challenge of jointly learning both the per-task model parameters and the inter-task relationships in a multi-task online learning setting. The proposed algorithm features a probabilistic interpretation, efficient update rules, and flexible modulation of whether learners focus on their specific tasks or jointly address all tasks. The paper also proves a sub-linear regret bound relative to the best linear predictor in hindsight. Experiments on three multi-task learning benchmark datasets show that the proposed approach outperforms several state-of-the-art online multi-task learning baselines.
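To illustrate the general idea of sharing online updates across related tasks, the sketch below implements a simplified smoothed multi-task perceptron. It is not the paper's algorithm: the inter-task relationship weights here are fixed and uniform (controlled by a hypothetical `alpha` parameter) rather than learned, and the update rule is a plain mistake-driven perceptron step.

```python
import numpy as np

def smoothed_mtl_perceptron(stream, num_tasks, dim, alpha=0.5):
    """Illustrative smoothed online multi-task perceptron.

    Each round receives (task_id, x, y) with y in {-1, +1}. On a mistake,
    the update is shared across tasks: a fraction `alpha` goes to the
    active task and the remainder is spread uniformly over the others.
    This fixed sharing scheme stands in for learned inter-task
    relationships and is only a sketch of the general setting.
    """
    W = np.zeros((num_tasks, dim))  # one weight vector per task
    mistakes = 0
    for task, x, y in stream:
        if y * W[task].dot(x) <= 0:  # mistake-driven update
            mistakes += 1
            for k in range(num_tasks):
                share = alpha if k == task else (1 - alpha) / (num_tasks - 1)
                W[k] += share * y * x
    return W, mistakes
```

With `alpha` close to 1, each learner focuses on its own task; smaller values push the learners toward jointly addressing all tasks, which is the trade-off the abstract describes.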

Cite

Text

Murugesan et al. "Adaptive Smoothed Online Multi-Task Learning." Neural Information Processing Systems, 2016.

Markdown

[Murugesan et al. "Adaptive Smoothed Online Multi-Task Learning." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/murugesan2016neurips-adaptive/)

BibTeX

@inproceedings{murugesan2016neurips-adaptive,
  title     = {{Adaptive Smoothed Online Multi-Task Learning}},
  author    = {Murugesan, Keerthiram and Liu, Hanxiao and Carbonell, Jaime and Yang, Yiming},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {4296--4304},
  url       = {https://mlanthology.org/neurips/2016/murugesan2016neurips-adaptive/}
}