Scalarization for Multi-Task and Multi-Domain Learning at Scale

Abstract

Training a single model on multiple input domains and/or output tasks allows for compressing information from multiple sources into a unified backbone, hence improving model efficiency. It also enables potential positive knowledge transfer across tasks/domains, leading to improved accuracy and data-efficient training. However, optimizing such networks is a challenge, in particular due to discrepancies between the different tasks or domains. Despite several hypotheses and solutions proposed over the years, recent work has shown that uniform scalarization training, i.e., simply minimizing the average of the task losses, yields on-par performance with more costly SotA optimization methods. This raises the question of how well we understand the training dynamics of multi-task and multi-domain networks. In this work, we first devise a large-scale unified analysis of multi-domain and multi-task learning to better understand the dynamics of scalarization across varied task/domain combinations and model sizes. Following these insights, we then propose to leverage population-based training to efficiently search for the optimal scalarization weights when dealing with a large number of tasks or domains.
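
For readers unfamiliar with the term, scalarization collapses the multi-task objective into a single scalar loss, a weighted sum of the per-task losses; uniform scalarization uses equal weights. Below is a minimal PyTorch-style sketch of one such training step; the names (model, batches, loss_fns) are illustrative assumptions, not taken from the paper's code:

def scalarized_step(model, optimizer, batches, loss_fns, weights=None):
    """One update on the scalarized objective sum_t w_t * L_t(theta).

    batches  : list of (inputs, targets) pairs, one per task/domain
    loss_fns : list of per-task loss functions
    weights  : scalarization weights; None means uniform (w_t = 1/T)
    Assumes PyTorch-style tensors and optimizer.
    """
    # Compute each task's loss on its own batch with the shared backbone.
    losses = [loss_fn(model(x), y) for (x, y), loss_fn in zip(batches, loss_fns)]
    if weights is None:
        weights = [1.0 / len(losses)] * len(losses)  # uniform scalarization
    # Scalarized objective: weighted sum of task losses.
    total = sum(w * l for w, l in zip(weights, losses))
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.detach()

The paper's second contribution, searching for good (non-uniform) weights with population-based training, amounts to tuning the weights argument above during training rather than fixing it in advance.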

Cite

Text

Royer et al. "Scalarization for Multi-Task and Multi-Domain Learning at Scale." Neural Information Processing Systems, 2023.

Markdown

[Royer et al. "Scalarization for Multi-Task and Multi-Domain Learning at Scale." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/royer2023neurips-scalarization/)

BibTeX

@inproceedings{royer2023neurips-scalarization,
  title     = {{Scalarization for Multi-Task and Multi-Domain Learning at Scale}},
  author    = {Royer, Amelie and Blankevoort, Tijmen and Bejnordi, Babak Ehteshami},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/royer2023neurips-scalarization/}
}