Adaptive Adversarial Multi-Task Representation Learning
Abstract
Adversarial Multi-task Representation Learning (AMTRL) methods can boost the performance of Multi-task Representation Learning (MTRL) models. However, the theoretical mechanism behind AMTRL remains under-investigated. To fill this gap, we study the generalization error bound of AMTRL through the lens of Lagrangian duality. Based on this duality, we propose a novel adaptive AMTRL algorithm that improves upon existing AMTRL methods. Extensive experiments back up our theoretical analysis and validate the superiority of our proposed algorithm.
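The abstract's core idea, viewing per-task losses through Lagrangian duality and adapting task weights accordingly, can be illustrated with a minimal sketch. This is not the paper's algorithm: it omits the adversarial discriminator and the shared representation, and all data, hyperparameters (`eps`, `lr`, `dual_lr`), and variable names are hypothetical. It only shows the dual-ascent pattern in which each task's loss is treated as a constraint `loss_t <= eps` and its multiplier (weight) rises while the constraint is violated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy linear regression tasks on a shared input space (hypothetical data).
X = rng.normal(size=(100, 5))
true_w = [rng.normal(size=5), rng.normal(size=5)]
Y = [X @ w + 0.1 * rng.normal(size=100) for w in true_w]

W = np.zeros((2, 5))    # per-task parameters
lam = np.ones(2)        # Lagrange multipliers acting as adaptive task weights
eps, lr, dual_lr = 0.05, 0.01, 0.1

for step in range(500):
    losses = np.array([np.mean((X @ W[t] - Y[t]) ** 2) for t in range(2)])
    # Primal step: gradient descent on the Lagrangian, weighted by lam.
    for t in range(2):
        grad = 2 * X.T @ (X @ W[t] - Y[t]) / len(X)
        W[t] -= lr * lam[t] * grad
    # Dual step: raise the weight of any task whose loss exceeds eps,
    # lower it (but keep it nonnegative) once the constraint is satisfied.
    lam = np.maximum(lam + dual_lr * (losses - eps), 0.0)
```

Under this scheme the harder task automatically receives a larger weight, which is the adaptive behavior the duality analysis motivates.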
Cite
Mao et al. "Adaptive Adversarial Multi-Task Representation Learning." International Conference on Machine Learning, 2020.

BibTeX:
@inproceedings{mao2020icml-adaptive,
title = {{Adaptive Adversarial Multi-Task Representation Learning}},
author = {Mao, Yuren and Liu, Weiwei and Lin, Xuemin},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {6724--6733},
volume = {119},
url = {https://mlanthology.org/icml/2020/mao2020icml-adaptive/}
}