Adaptive Activation Network and Functional Regularization for Efficient and Flexible Deep Multi-Task Learning

Abstract

Multi-task learning (MTL) is a common paradigm that seeks to improve generalization performance by training related tasks simultaneously. However, searching for a flexible and accurate architecture that can be shared among multiple tasks remains a challenging problem. In this paper, we propose a novel deep learning model, the Task Adaptive Activation Network (TAAN), that can automatically learn the optimal network architecture for MTL. The main principle of TAAN is to derive flexible, task-specific activation functions from the data while fully sharing the remaining parameters of the network. We further propose two functional regularization methods that improve the MTL performance of TAAN. The improved performance of both TAAN and the regularization methods is demonstrated by comprehensive experiments.
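The core mechanism the abstract describes — task-specific activation functions combined with fully shared weights — can be sketched as a per-task weighted mixture of shared basis activations. This is a minimal illustrative sketch, not the paper's exact formulation: the basis functions, coefficient names (`alpha`), and the squared-coefficient-distance regularizer below are assumptions chosen to make the idea concrete.

```python
import numpy as np

# Shared basis activation functions (illustrative choices, not the paper's exact set)
BASES = [
    lambda z: np.maximum(z, 0.0),   # ReLU
    np.tanh,                        # tanh
    lambda z: z,                    # identity
]

def task_adaptive_activation(z, alpha):
    """Activation for one task: a weighted sum of shared basis functions.

    z     : pre-activation array (output of a shared linear layer)
    alpha : learned per-task coefficients, one per basis function
    """
    return sum(a * b(z) for a, b in zip(alpha, BASES))

def functional_regularizer(alpha_a, alpha_b):
    """A hypothetical functional regularizer: penalize the distance between
    two tasks' activation coefficients to encourage related tasks to learn
    similar activation functions."""
    return float(np.sum((np.asarray(alpha_a) - np.asarray(alpha_b)) ** 2))

# One shared hidden layer applied to inputs from two tasks
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))          # shared weights (identical for all tasks)
alphas = {"task_A": [1.0, 0.0, 0.0],     # behaves exactly like ReLU
          "task_B": [0.0, 0.5, 0.5]}     # mixes tanh and identity

x = rng.standard_normal(4)
z = x @ W                                # shared pre-activation
h_A = task_adaptive_activation(z, alphas["task_A"])
h_B = task_adaptive_activation(z, alphas["task_B"])
reg = functional_regularizer(alphas["task_A"], alphas["task_B"])
```

In training, the `alpha` coefficients would be optimized jointly with `W` by gradient descent, and the regularizer would be added to the task losses; both are omitted here to keep the sketch short.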

Cite

Text

Liu et al. "Adaptive Activation Network and Functional Regularization for Efficient and Flexible Deep Multi-Task Learning." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I04.5930

Markdown

[Liu et al. "Adaptive Activation Network and Functional Regularization for Efficient and Flexible Deep Multi-Task Learning." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/liu2020aaai-adaptive-a/) doi:10.1609/AAAI.V34I04.5930

BibTeX

@inproceedings{liu2020aaai-adaptive-a,
  title     = {{Adaptive Activation Network and Functional Regularization for Efficient and Flexible Deep Multi-Task Learning}},
  author    = {Liu, Yingru and Yang, Xuewen and Xie, Dongliang and Wang, Xin and Shen, Li and Huang, Haozhi and Balasubramanian, Niranjan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {4924--4931},
  doi       = {10.1609/AAAI.V34I04.5930},
  url       = {https://mlanthology.org/aaai/2020/liu2020aaai-adaptive-a/}
}