An Information-Theoretic Multi-Task Representation Learning Framework for Natural Language Understanding
Abstract
This paper proposes a new principled multi-task representation learning framework (InfoMTL) that extracts noise-invariant, sufficient representations for all tasks. It ensures the sufficiency of shared representations for every task and mitigates the negative effect of redundant features, which can enhance the language understanding of pre-trained language models (PLMs) under the multi-task paradigm. First, a shared information maximization principle is proposed to learn more sufficient shared representations for all target tasks. It avoids the insufficiency issue that arises from representation compression in the multi-task paradigm. Second, a task-specific information minimization principle is designed to mitigate the negative effect of potentially redundant features in the input for each task. It compresses task-irrelevant redundant information while preserving the information necessary for multi-task prediction. Experiments on six classification benchmarks show that our method outperforms 12 comparative multi-task methods under the same multi-task settings, especially in data-constrained and noisy scenarios. Extensive experiments demonstrate that the learned representations are more sufficient, data-efficient, and robust.
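Read through an information-bottleneck lens, the two principles can be sketched as a single combined objective. The notation below (input X, shared representation Z, task-specific representation Z_t, task label Y_t, trade-off weight β, number of tasks T) is illustrative shorthand for the two stated principles, not the paper's exact formulation:

```latex
% Shared information maximization keeps Z sufficient for all tasks;
% task-specific information minimization compresses redundancy in each Z_t
% while preserving label-relevant information (illustrative notation).
\mathcal{L}
  = -\,I(X; Z)
  \;+\; \sum_{t=1}^{T} \Big( I(Z; Z_t) \;-\; \beta \, I(Z_t; Y_t) \Big)
```

The first term encourages sufficiency of the shared encoder, and each summand is a per-task information-bottleneck trade-off between compression and predictive relevance.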
Cite
Text
Hu et al. "An Information-Theoretic Multi-Task Representation Learning Framework for Natural Language Understanding." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I16.33899
Markdown
[Hu et al. "An Information-Theoretic Multi-Task Representation Learning Framework for Natural Language Understanding." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/hu2025aaai-information/) doi:10.1609/AAAI.V39I16.33899
BibTeX
@inproceedings{hu2025aaai-information,
title = {{An Information-Theoretic Multi-Task Representation Learning Framework for Natural Language Understanding}},
author = {Hu, Dou and Wei, Lingwei and Zhou, Wei and Hu, Songlin},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {17276-17286},
doi = {10.1609/AAAI.V39I16.33899},
url = {https://mlanthology.org/aaai/2025/hu2025aaai-information/}
}