On the Power of Multitask Representation Learning with Gradient Descent

Abstract

Representation learning, particularly multi-task representation learning, has gained widespread popularity in various deep learning applications, ranging from computer vision to natural language processing, due to its remarkable generalization performance. Despite its growing use, our understanding of the underlying mechanisms remains limited. In this paper, we provide a theoretical analysis elucidating why multi-task representation learning outperforms its single-task counterpart in scenarios involving over-parameterized two-layer convolutional neural networks trained by gradient descent. Our analysis is based on a data model that encompasses both task-shared and task-specific features, a setting commonly encountered in real-world applications. We also present experiments on synthetic and real-world data to illustrate and validate our theoretical findings.
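To make the setting concrete, below is a minimal, self-contained PyTorch sketch of the kind of setup the abstract describes: data with a task-shared feature and task-specific features, a two-layer CNN with shared filters, and plain gradient descent, trained either on a single task or on several tasks jointly. Every detail here (dimensions, the exact data model, the architecture, the hyperparameters) is an illustrative assumption, not the construction or the result from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

d, num_tasks, n_per_task, width = 32, 4, 100, 16

# Hypothetical data model: each example has two "patches"; one carries a feature
# shared by all tasks (v), the other a task-specific feature (u_t), both buried in noise.
v = F.normalize(torch.randn(d), dim=0)                 # task-shared feature
u = F.normalize(torch.randn(num_tasks, d), dim=1)      # task-specific features

def make_task_data(t, n, noise_std=0.4):
    y = torch.randint(0, 2, (n,)).float() * 2 - 1      # labels in {-1, +1}
    x = noise_std * torch.randn(n, 2, d)
    x[:, 0] += y[:, None] * v                          # patch 1: shared signal
    x[:, 1] += y[:, None] * u[t]                       # patch 2: task-specific signal
    return x, y

# Two-layer CNN: a shared convolutional (filter) layer applied to every patch,
# with one linear head per task on top of the pooled activations.
class SharedCNN(nn.Module):
    def __init__(self, d, width, num_tasks):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(width, d))       # shared filters
        self.heads = nn.Parameter(0.1 * torch.randn(num_tasks, width))

    def forward(self, x, t):                            # x: (n, patches, d)
        h = F.relu(torch.einsum("npd,md->npm", x, self.W)).sum(dim=1)  # pool over patches
        return h @ self.heads[t]

def train(task_ids, steps=500, lr=0.2):
    data = {t: make_task_data(t, n_per_task) for t in task_ids}
    model = SharedCNN(d, width, num_tasks)
    opt = torch.optim.SGD(model.parameters(), lr=lr)    # full-batch gradient descent
    for _ in range(steps):
        opt.zero_grad()
        loss = sum(F.softplus(-y * model(x, t)).mean() for t, (x, y) in data.items())
        (loss / len(data)).backward()
        opt.step()
    return model

single = train([0])                                     # single-task training
multi = train(list(range(num_tasks)))                   # multi-task training

# Evaluate both on fresh data from task 0. Exact numbers vary from run to run;
# this toy experiment only illustrates the setup, not the paper's separation result.
x_te, y_te = make_task_data(0, 5000)
for name, m in [("single-task", single), ("multi-task", multi)]:
    acc = (torch.sign(m(x_te, 0)) == y_te).float().mean().item()
    print(f"{name}: test accuracy on task 0 = {acc:.3f}")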

Cite

Text

Li et al. "On the Power of Multitask Representation Learning with Gradient Descent." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.

Markdown

[Li et al. "On the Power of Multitask Representation Learning with Gradient Descent." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/li2025aistats-power/)

BibTeX

@inproceedings{li2025aistats-power,
  title     = {{On the Power of Multitask Representation Learning with Gradient Descent}},
  author    = {Li, Qiaobo and Chen, Zixiang and Deng, Yihe and Kou, Yiwen and Cao, Yuan and Gu, Quanquan},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  year      = {2025},
  pages     = {4357--4365},
  volume    = {258},
  url       = {https://mlanthology.org/aistats/2025/li2025aistats-power/}
}