Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization

Abstract

Multi-Task Learning (MTL) is a well-established paradigm for jointly learning models for multiple correlated tasks. Often the tasks conflict, requiring trade-offs between them during optimization. In such cases, multi-objective optimization based MTL methods can be used to find one or more Pareto optimal solutions. A common requirement in MTL applications that cannot be addressed by these methods is to find a solution satisfying user-specified preferences with respect to task-specific losses. We advance the state-of-the-art by developing the first gradient-based multi-objective MTL algorithm to solve this problem. Our unique approach combines multiple gradient descent with carefully controlled ascent to traverse the Pareto front in a principled manner, which also makes it robust to initialization. The scalability of our algorithm enables its use in large-scale deep networks for MTL. Assuming only differentiability of the task-specific loss functions, we provide theoretical guarantees for convergence. Our experiments show that our algorithm outperforms the best competing methods on benchmark datasets.
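To make the preference-driven idea concrete, below is a minimal, self-contained sketch (NumPy, toy two-task quadratic problem) of how a user preference vector over task losses can steer a multi-gradient method: it alternates a plain preference-weighted descent step with a "balance" step that pushes on the worst preference-weighted loss and may let other losses temporarily ascend. The function names, the KL-style non-uniformity score, and the step rule are illustrative simplifications, not the paper's EPO Search algorithm, which selects its search direction by solving a small quadratic program and comes with the convergence guarantees described in the abstract.

```python
import numpy as np

# Toy two-task problem: shared parameter theta in R^2, with two convex
# quadratic losses pulling toward different optima (1,0) and (0,1).
def task_losses(theta):
    l1 = 0.5 * np.sum((theta - np.array([1.0, 0.0])) ** 2)
    l2 = 0.5 * np.sum((theta - np.array([0.0, 1.0])) ** 2)
    return np.array([l1, l2])

def task_grads(theta):
    g1 = theta - np.array([1.0, 0.0])
    g2 = theta - np.array([0.0, 1.0])
    return np.stack([g1, g2])  # shape (num_tasks, dim)

def non_uniformity(losses, r):
    """KL-style score of the normalized preference-weighted losses against
    the uniform distribution; it is zero exactly when r_1*l_1 = ... = r_m*l_m."""
    w = r * losses
    p = w / w.sum()
    m = len(losses)
    return float(np.sum(p * np.log(p * m + 1e-12)))

def preference_step(theta, r, lr=0.1, tol=1e-3):
    losses = task_losses(theta)
    grads = task_grads(theta)
    if non_uniformity(losses, r) > tol:
        # "Balance" mode: descend only on the task with the largest
        # preference-weighted loss; the other losses may increase (ascent).
        alpha = np.zeros_like(r)
        alpha[np.argmax(r * losses)] = 1.0
    else:
        # "Descent" mode: plain preference-weighted combination of gradients.
        alpha = r / r.sum()
    return theta - lr * (alpha @ grads)

# Example: preference r = (0.8, 0.2) asks the solution to keep r1*l1 and r2*l2
# balanced, i.e. to trade a smaller l1 for a larger l2.
r = np.array([0.8, 0.2])
theta = np.array([2.0, 2.0])
for _ in range(500):
    theta = preference_step(theta, r)
print("theta:", theta, "losses:", task_losses(theta), "r*l:", r * task_losses(theta))
```

Running the sketch, the iterates settle near a trade-off point where the preference-weighted losses are approximately equal; the actual algorithm in the paper traverses the Pareto front more carefully and is what should be used in practice.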

Cite

Text

Mahapatra and Rajan. "Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization." International Conference on Machine Learning, 2020.

Markdown

[Mahapatra and Rajan. "Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/mahapatra2020icml-multitask/)

BibTeX

@inproceedings{mahapatra2020icml-multitask,
  title     = {{Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization}},
  author    = {Mahapatra, Debabrata and Rajan, Vaibhav},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {6597--6607},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/mahapatra2020icml-multitask/}
}