Deep Multi-Task Learning with Adversarial-and-Cooperative Nets
Abstract
In this paper, we propose a deep multi-Task learning model based on Adversarial-and-COoperative nets (TACO). The goal is to use an adversarial-and-cooperative strategy to decouple the task-common and task-specific knowledge, facilitating fine-grained knowledge sharing among tasks. TACO accommodates multiple game players, i.e., feature extractors, a domain discriminator, and tri-classifiers. They play minimax games adversarially and cooperatively to distill the task-common and task-specific features while respecting their discriminative structures. Moreover, TACO adopts a divide-and-combine strategy that leverages the decoupled multi-view information to further improve the model's generalization performance. Experimental results show that the proposed method significantly outperforms state-of-the-art algorithms on benchmark datasets in both multi-task learning and semi-supervised domain adaptation scenarios.
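The adversarial part of such a setup is commonly implemented with a gradient reversal layer: the domain discriminator is trained to tell tasks apart from the shared features, while the reversed gradient pushes the feature extractor to make those features task-indistinguishable. The following NumPy sketch illustrates only that generic mechanism; the `GradReverse` class, the toy linear encoder, and the discriminator here are our own illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer: identity on the forward pass,
    sign-flipped (and scaled by lam) gradient on the backward pass.
    A standard device for adversarial feature learning; shown here
    as a generic illustration, not TACO's exact mechanism."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x

    def backward(self, grad_out):
        return -self.lam * grad_out

# One-step numeric check on a toy linear encoder -> discriminator chain.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))        # batch of inputs
W_enc = rng.normal(size=(3, 2))    # shared (task-common) encoder weights
w_dis = rng.normal(size=(2,))      # domain discriminator weights

grl = GradReverse(lam=0.5)
h = x @ W_enc                      # shared features
h_rev = grl.forward(h)             # identity on the forward pass
logits = h_rev @ w_dis             # discriminator scores per example

# Backward: the gradient of sum(logits) w.r.t. the features is w_dis,
# broadcast over the batch; the reversal layer flips and scales it
# before it reaches the encoder, so the encoder ascends the
# discriminator's loss while the discriminator descends it.
grad_h = np.broadcast_to(w_dis, h.shape)
grad_enc_in = grl.backward(grad_h)         # what the encoder receives
grad_W_enc = x.T @ grad_enc_in             # resulting encoder gradient

print(np.allclose(grad_enc_in, -0.5 * grad_h))  # True: sign flipped
```

Stacking a task classifier on the same shared features (without the reversal layer) completes the minimax game: the encoder cooperates with the classifiers and competes with the discriminator.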
Cite
Text
Yang et al. "Deep Multi-Task Learning with Adversarial-and-Cooperative Nets." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/566
Markdown
[Yang et al. "Deep Multi-Task Learning with Adversarial-and-Cooperative Nets." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/yang2019ijcai-deep/) doi:10.24963/IJCAI.2019/566
BibTeX
@inproceedings{yang2019ijcai-deep,
title = {{Deep Multi-Task Learning with Adversarial-and-Cooperative Nets}},
author = {Yang, Pei and Tan, Qi and Ye, Jieping and Tong, Hanghang and He, Jingrui},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {4078--4084},
doi = {10.24963/IJCAI.2019/566},
url = {https://mlanthology.org/ijcai/2019/yang2019ijcai-deep/}
}