Efficient Pareto Manifold Learning with Low-Rank Structure
Abstract
Multi-task learning, which optimizes performance across multiple tasks, is inherently a multi-objective optimization problem. Various algorithms have been developed to provide discrete trade-off solutions on the Pareto front. Recently, continuous Pareto front approximation using a linear combination of base networks has emerged as a compelling strategy. However, it suffers from scalability issues when the number of tasks is large. To address this issue, we propose a novel approach that integrates a main network with several low-rank matrices to efficiently learn the Pareto manifold. This significantly reduces the number of parameters and facilitates the extraction of shared features. We also introduce orthogonal regularization to further bolster performance. Extensive experimental results demonstrate that the proposed approach outperforms state-of-the-art baselines, especially on datasets with a large number of tasks.
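The core idea in the abstract — replacing a linear combination of full base networks with a shared main network plus preference-weighted low-rank corrections — can be sketched as follows. This is an illustrative sketch only, not the paper's exact formulation: the class name, the rank hyperparameter, and the specific orthogonality penalty are assumptions made for the example. The layer's effective weight is W(λ) = W₀ + Σᵢ λᵢ BᵢAᵢ, where λ is a task-preference vector and each BᵢAᵢ is a low-rank matrix, so the parameter count grows with the rank rather than with full copies of the network.

```python
import torch
import torch.nn as nn

class LowRankParetoLinear(nn.Module):
    """Illustrative sketch (not the paper's exact formulation): a linear
    layer whose effective weight is a shared base matrix plus a
    preference-weighted sum of low-rank corrections,
    W(lam) = W0 + sum_i lam_i * B_i @ A_i."""

    def __init__(self, in_dim: int, out_dim: int, num_tasks: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)  # shared main network weights
        # One rank-`rank` factor pair per task: B_i is (out x r), A_i is (r x in).
        self.A = nn.Parameter(torch.randn(num_tasks, rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.randn(num_tasks, out_dim, rank) * 0.01)

    def forward(self, x: torch.Tensor, pref: torch.Tensor) -> torch.Tensor:
        # pref: (num_tasks,) preference vector, e.g. a point on the simplex.
        # Combine the per-task low-rank updates weighted by the preference.
        delta = torch.einsum("t,tor,tri->oi", pref, self.B, self.A)
        return self.base(x) + x @ delta.T

def orthogonal_penalty(A: torch.Tensor) -> torch.Tensor:
    """Assumed regularizer for illustration: push the rows of each A_i
    toward orthonormality by penalizing ||A_i A_i^T - I||_F^2."""
    rank = A.shape[1]
    eye = torch.eye(rank, device=A.device)
    gram = torch.einsum("tri,tsi->trs", A, A)  # (num_tasks, rank, rank)
    return ((gram - eye) ** 2).sum()
```

A single forward pass then takes both an input batch and a preference vector, so one set of parameters serves the whole Pareto manifold: `layer(torch.randn(2, 8), torch.tensor([0.5, 0.3, 0.2]))` for a `LowRankParetoLinear(8, 4, num_tasks=3)`.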
Cite
Text
Chen and Kwok. "Efficient Pareto Manifold Learning with Low-Rank Structure." International Conference on Machine Learning, 2024.
Markdown
[Chen and Kwok. "Efficient Pareto Manifold Learning with Low-Rank Structure." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/chen2024icml-efficient/)
BibTeX
@inproceedings{chen2024icml-efficient,
title = {{Efficient Pareto Manifold Learning with Low-Rank Structure}},
author = {Chen, Weiyu and Kwok, James},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {7015--7032},
volume = {235},
url = {https://mlanthology.org/icml/2024/chen2024icml-efficient/}
}