Continual Optimization with Symmetry Teleportation for Multi-Task Learning
Abstract
Multi-task learning (MTL) is a widely explored paradigm that enables the simultaneous learning of multiple tasks using a single model. Despite numerous solutions, the key issues of optimization conflict and task imbalance remain under-addressed, limiting performance. Unlike existing optimization-based approaches, which typically reweight task losses or gradients to mitigate conflicts or promote progress, we propose a novel approach based on Continual Optimization with Symmetry Teleportation (COST). When an optimization conflict arises during MTL, we seek an alternative, loss-equivalent point on the loss landscape that reduces the conflict. Specifically, we use a low-rank adapter (LoRA) to make this teleportation practical by designing convergent, loss-invariant objectives. Additionally, we introduce a historical-trajectory reuse strategy to continually leverage the benefits of advanced optimizers. Extensive experiments on multiple mainstream datasets demonstrate the effectiveness of our approach. COST is a plug-and-play solution that enhances a wide range of existing MTL methods, and it achieves superior performance when integrated with state-of-the-art methods.
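To make the teleportation idea concrete, below is a minimal NumPy sketch, not the paper's implementation: for a two-layer linear model f(x) = W2 W1 x, the reparameterization W1 -> G W1, W2 -> W2 G^{-1} leaves every task loss exactly unchanged, and a LoRA-like rank-1 update G = I + u v^T is tuned to raise the cosine similarity between the two tasks' gradients. The toy model, all function names, and the finite-difference search are illustrative assumptions standing in for the paper's convergent, loss-invariant objectives.

# Minimal sketch of loss-invariant teleportation, assuming a two-layer linear
# model f(x) = W2 @ W1 @ x with two regression tasks. The symmetry
# W1 -> G @ W1, W2 -> W2 @ inv(G) leaves both task losses exactly unchanged,
# so we can search over G for a point with less gradient conflict.
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 4, 6, 32                               # input dim, hidden dim, samples
X = rng.normal(size=(n, d))
Y1 = rng.normal(size=(n, 1))                     # task-1 targets
Y2 = rng.normal(size=(n, 1))                     # task-2 targets
W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(1, h))

def loss_and_grads(W1, W2, Y):
    """Squared-error loss of f(x) = W2 W1 x and its gradients."""
    R = X @ W1.T @ W2.T - Y                      # residuals, shape (n, 1)
    L = 0.5 * np.sum(R ** 2) / n
    gW1 = (W2.T @ R.T @ X) / n                   # dL/dW1, shape (h, d)
    gW2 = (R.T @ (X @ W1.T)) / n                 # dL/dW2, shape (1, h)
    return L, gW1, gW2

def conflict(W1, W2):
    """Cosine similarity between the two tasks' flattened gradients
    (negative values indicate a gradient conflict)."""
    _, a1, b1 = loss_and_grads(W1, W2, Y1)
    _, a2, b2 = loss_and_grads(W1, W2, Y2)
    g1 = np.concatenate([a1.ravel(), b1.ravel()])
    g2 = np.concatenate([a2.ravel(), b2.ravel()])
    return g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))

def teleport(W1, W2, steps=100, lr=0.1, eps=1e-4):
    """Search a rank-1 symmetry G = I + u v^T (a LoRA-like low-rank update)
    that keeps the loss exactly invariant while increasing gradient
    alignment. Finite differences stand in for a proper optimizer."""
    u = 1e-2 * rng.normal(size=h)
    v = 1e-2 * rng.normal(size=h)

    def apply(u, v):
        G = np.eye(h) + np.outer(u, v)
        Ginv = np.eye(h) - np.outer(u, v) / (1.0 + v @ u)  # Sherman-Morrison
        return G @ W1, W2 @ Ginv                 # loss-equivalent weights

    for _ in range(steps):
        base = conflict(*apply(u, v))
        gu, gv = np.zeros(h), np.zeros(h)
        for i in range(h):                       # finite-difference gradient
            du = u.copy(); du[i] += eps
            gu[i] = (conflict(*apply(du, v)) - base) / eps
            dv = v.copy(); dv[i] += eps
            gv[i] = (conflict(*apply(u, dv)) - base) / eps
        u += lr * gu                             # ascend cosine similarity
        v += lr * gv
    return apply(u, v)

L0 = loss_and_grads(W1, W2, Y1)[0] + loss_and_grads(W1, W2, Y2)[0]
print("conflict before teleport:", conflict(W1, W2))
W1t, W2t = teleport(W1, W2)
Lt = loss_and_grads(W1t, W2t, Y1)[0] + loss_and_grads(W1t, W2t, Y2)[0]
print("conflict after teleport: ", conflict(W1t, W2t))
print("total loss unchanged:    ", np.isclose(L0, Lt))

Since G G^{-1} = I by the Sherman-Morrison identity, the product W2 W1 (and hence every task loss) is preserved exactly; only the point on the loss-equivalent orbit changes, which is what lets the gradients realign.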
Cite
Text
Zhou et al. "Continual Optimization with Symmetry Teleportation for Multi-Task Learning." Advances in Neural Information Processing Systems, 2025.

Markdown
[Zhou et al. "Continual Optimization with Symmetry Teleportation for Multi-Task Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhou2025neurips-continual/)

BibTeX
@inproceedings{zhou2025neurips-continual,
title = {{Continual Optimization with Symmetry Teleportation for Multi-Task Learning}},
author = {Zhou, Zhipeng and Meng, Ziqiao and Wu, Pengcheng and Zhao, Peilin and Miao, Chunyan},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/zhou2025neurips-continual/}
}