Self-Paced Multi-Task Learning
Abstract
Multi-task learning is a paradigm in which multiple tasks are learned jointly. Previous multi-task learning models usually treat all tasks, and all instances within each task, equally during learning. Inspired by the fact that humans often learn from easy concepts to hard ones in the cognitive process, in this paper we propose a novel multi-task learning framework that learns the tasks while simultaneously taking into account the complexities of both the tasks and the instances within each task. We propose a novel formulation built on a new task-oriented regularizer that can jointly prioritize tasks and instances; the model can thus be interpreted as a self-paced learner for multi-task learning. An efficient block coordinate descent algorithm is developed to solve the proposed objective function, and the convergence of the algorithm is guaranteed. Experimental results on toy and real-world datasets demonstrate the effectiveness of the proposed approach compared to state-of-the-art methods.
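The abstract only describes the method at a high level; the exact task-oriented regularizer and objective are given in the paper itself. As a loose illustration of the self-paced idea it describes, below is a minimal Python sketch under assumptions not taken from the paper: per-task ridge regression as the base learner, the classic hard-threshold self-paced rule (an instance or task is admitted only when its loss falls below a pace parameter), and a multiplicative annealing of that pace so harder examples enter later. All names here (self_paced_mtl, ridge_fit, lam_inst, lam_task, mu) are hypothetical, and the sketch omits the cross-task coupling a real multi-task model would include.

import numpy as np

def ridge_fit(X, y, sw, ridge=1e-2):
    """Weighted ridge regression: solve (X^T diag(sw) X + ridge*I) w = X^T diag(sw) y."""
    A = X.T @ (sw[:, None] * X) + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ (sw * y))

def self_paced_mtl(tasks, n_iters=10, lam_inst=0.5, lam_task=1.0, mu=1.3):
    """Hypothetical sketch of self-paced multi-task learning via block
    coordinate descent. `tasks` is a list of (X, y) pairs, one per task."""
    # Warm start: fit each task on all of its instances.
    W = [ridge_fit(X, y, np.ones(len(y))) for X, y in tasks]
    for _ in range(n_iters):
        for t, (X, y) in enumerate(tasks):
            loss = (X @ W[t] - y) ** 2                  # per-instance squared loss
            v = (loss < lam_inst).astype(float)         # easy instances get weight 1
            u = 1.0 if loss.mean() < lam_task else 0.0  # easy tasks get weight 1
            if u * v.sum() > 0:                         # refit only on selected data
                W[t] = ridge_fit(X, y, u * v)
        lam_inst *= mu  # anneal the pace: admit harder instances
        lam_task *= mu  # and harder tasks as learning proceeds
    return W

# Usage on synthetic data: each task is a noisy linear regression problem.
rng = np.random.default_rng(0)
def make_task(n=200, d=5):
    X = rng.normal(size=(n, d))
    w = rng.normal(size=d)
    return X, X @ w + 0.1 * rng.normal(size=n)

W = self_paced_mtl([make_task() for _ in range(4)])

The hard selection rule above is the closed-form minimizer of a weighted loss plus the standard self-paced penalty -lambda * sum_i v_i with v_i in [0, 1] (Kumar et al., 2010); the paper's task-oriented regularizer jointly couples the task-level and instance-level weights rather than thresholding them independently as this sketch does.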
Cite
Text
Li et al. "Self-Paced Multi-Task Learning." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.10847
Markdown
[Li et al. "Self-Paced Multi-Task Learning." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/li2017aaai-self/) doi:10.1609/AAAI.V31I1.10847
BibTeX
@inproceedings{li2017aaai-self,
title = {{Self-Paced Multi-Task Learning}},
author = {Li, Changsheng and Yan, Junchi and Wei, Fan and Dong, Weishan and Liu, Qingshan and Zha, Hongyuan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2017},
pages = {2175--2181},
doi = {10.1609/AAAI.V31I1.10847},
url = {https://mlanthology.org/aaai/2017/li2017aaai-self/}
}