Multi-Task Learning as a Bargaining Game

Abstract

In multi-task learning (MTL), a joint model is trained to simultaneously make predictions for several tasks. Joint training reduces computation costs and improves data efficiency; however, since the gradients of these different tasks may conflict, training a joint model for MTL often yields lower performance than the corresponding single-task models. A common method for alleviating this issue is to combine the per-task gradients into a joint update direction using a particular heuristic. In this paper, we propose viewing the gradient combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update. Under certain assumptions, the bargaining problem has a unique solution, known as the Nash Bargaining Solution, which we propose to use as a principled approach to multi-task learning. We describe a new MTL optimization procedure, Nash-MTL, and derive theoretical guarantees for its convergence. Empirically, we show that Nash-MTL achieves state-of-the-art results on multiple MTL benchmarks in various domains.
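
For a concrete picture of the update: per the paper, the Nash Bargaining Solution to this game gives a joint update Δθ = Σᵢ αᵢ gᵢ, where G is the matrix whose columns are the per-task gradients gᵢ and the positive weights α satisfy (GᵀG)α = 1/α element-wise. The NumPy sketch below approximates these weights with a simple damped fixed-point iteration; the function name, the iteration scheme, and the toy gradients are illustrative assumptions, not the paper's solver (which handles this condition via a sequence of convex approximations).

```python
import numpy as np

def nash_mtl_weights(grads, num_iters=50, eps=1e-8):
    """Illustrative sketch (not the paper's solver): approximate positive
    weights alpha satisfying (G^T G) alpha = 1 / alpha element-wise,
    the paper's optimality condition for the Nash Bargaining Solution.

    grads: (d, k) array whose columns are the k per-task gradients.
    Returns alpha of shape (k,).
    """
    G = np.asarray(grads, dtype=float)
    gram = G.T @ G                 # (k, k) Gram matrix of task gradients
    alpha = np.ones(G.shape[1])    # start from uniform weights
    for _ in range(num_iters):
        # Damped fixed-point step toward gram @ alpha == 1 / alpha.
        # Assumed heuristic for illustration; the paper instead solves
        # a sequence of convex approximations to this condition.
        target = 1.0 / np.maximum(gram @ alpha, eps)
        alpha = 0.5 * alpha + 0.5 * target
    return alpha

# Toy usage with two 2-D task gradients.
G = np.stack([np.array([1.0, 0.5]), np.array([-0.5, 1.0])], axis=1)
alpha = nash_mtl_weights(G)
joint_update = G @ alpha           # Delta theta = sum_i alpha_i g_i
print(alpha, joint_update)
```

In a training loop, one would recompute the per-task gradients of the shared parameters at each step, solve for α, and take a step along the weighted combination instead of the plain summed gradient.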

Cite

Text

Navon et al. "Multi-Task Learning as a Bargaining Game." International Conference on Machine Learning, 2022.

Markdown

[Navon et al. "Multi-Task Learning as a Bargaining Game." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/navon2022icml-multitask/)

BibTeX

@inproceedings{navon2022icml-multitask,
  title     = {{Multi-Task Learning as a Bargaining Game}},
  author    = {Navon, Aviv and Shamsian, Aviv and Achituve, Idan and Maron, Haggai and Kawaguchi, Kenji and Chechik, Gal and Fetaya, Ethan},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {16428--16446},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/navon2022icml-multitask/}
}