Loss-Balanced Task Weighting to Reduce Negative Transfer in Multi-Task Learning

Abstract

In settings with related prediction tasks, integrated multi-task learning models can often improve performance relative to independent single-task models. However, even when average task performance improves, individual tasks may experience negative transfer, in which the multi-task model's predictions are worse than the single-task model's. We show the prevalence of negative transfer in a computational chemistry case study with 128 tasks and introduce a framework that provides a foundation for reducing negative transfer in multi-task models. Our Loss-Balanced Task Weighting approach dynamically updates task weights during model training to control the influence of individual tasks.
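To make the abstract's "dynamically updates task weights" concrete, below is a minimal PyTorch sketch of a loss-balanced weighting rule in the spirit of the paper: each task's weight is set to the ratio of its current loss to a reference loss recorded at the start of the training period, raised to a power alpha. The toy trunk-and-heads model, the synthetic data, the 25-step "epoch" boundary, and alpha = 0.5 are illustrative assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

def lbtw_weights(current_losses, initial_losses, alpha=0.5):
    # Weight each task by (L_current / L_initial) ** alpha: tasks whose
    # loss has dropped fastest get smaller weights, so no single task
    # dominates the shared parameters. alpha in (0, 1] controls how
    # aggressively fast-learning tasks are down-weighted.
    return [(lt / max(l0, 1e-12)) ** alpha
            for lt, l0 in zip(current_losses, initial_losses)]

# Toy multi-task setup (assumed for illustration): one shared trunk,
# one binary-classification head per task.
torch.manual_seed(0)
n_tasks, d = 4, 16
trunk = nn.Sequential(nn.Linear(d, 32), nn.ReLU())
heads = nn.ModuleList(nn.Linear(32, 1) for _ in range(n_tasks))
opt = torch.optim.Adam(list(trunk.parameters()) + list(heads.parameters()),
                       lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

initial_losses = None
for step in range(100):
    x = torch.randn(64, d)
    y = torch.randint(0, 2, (64, n_tasks)).float()
    h = trunk(x)
    losses = [loss_fn(heads[i](h).squeeze(-1), y[:, i])
              for i in range(n_tasks)]
    if step % 25 == 0:  # stand-in for "first batch of each epoch"
        initial_losses = [l.item() for l in losses]
    w = lbtw_weights([l.item() for l in losses], initial_losses)
    opt.zero_grad()
    sum(wi * li for wi, li in zip(w, losses)).backward()
    opt.step()
```

Because the weights are computed from detached loss values, the weighting itself contributes no gradient; it only rescales each task's contribution to the combined objective at every step.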

Cite

Text

Liu et al. "Loss-Balanced Task Weighting to Reduce Negative Transfer in Multi-Task Learning." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33019977

Markdown

[Liu et al. "Loss-Balanced Task Weighting to Reduce Negative Transfer in Multi-Task Learning." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/liu2019aaai-loss/) doi:10.1609/AAAI.V33I01.33019977

BibTeX

@inproceedings{liu2019aaai-loss,
  title     = {{Loss-Balanced Task Weighting to Reduce Negative Transfer in Multi-Task Learning}},
  author    = {Liu, Shengchao and Liang, Yingyu and Gitter, Anthony},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {9977--9978},
  doi       = {10.1609/AAAI.V33I01.33019977},
  url       = {https://mlanthology.org/aaai/2019/liu2019aaai-loss/}
}