Optimal Task Order for Continual Learning of Multiple Tasks

Abstract

Continual learning of multiple tasks remains a major challenge for neural networks. Here, we investigate how task order influences continual learning and propose a strategy for optimizing it. Leveraging a linear teacher-student model with latent factors, we derive an analytical expression relating task similarity and ordering to learning performance. Our analysis reveals two principles that hold across a wide range of parameters: (1) tasks should be arranged from the least representative to the most typical, and (2) adjacent tasks should be dissimilar. We validate these rules on both synthetic data and real-world image classification datasets (Fashion-MNIST, CIFAR-10, CIFAR-100), demonstrating consistent performance improvements in both multilayer perceptrons and convolutional neural networks. Our work thus presents a generalizable framework for task-order optimization in task-incremental continual learning.
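
The two ordering principles can be illustrated with a minimal sketch. It assumes tasks are summarized by a symmetric pairwise similarity matrix (the paper derives task similarity analytically from a latent-factor teacher-student model; the matrix here is a stand-in supplied by the caller), and the heuristic below is an illustration of the stated principles, not the paper's exact optimization: tasks are first sorted by mean similarity to all others so the least representative come first, then neighbors are greedily swapped whenever that lowers the total similarity between adjacent tasks.

import numpy as np

def order_tasks(similarity, swap_passes=10):
    """Heuristic task ordering following the two principles:
    (1) least typical tasks first, (2) dissimilar tasks adjacent.

    `similarity` is a symmetric (n_tasks x n_tasks) matrix of pairwise
    task similarities; how it is computed is left to the caller.
    """
    n = similarity.shape[0]
    # Principle 1: typicality = mean similarity to the other tasks;
    # sort ascending so the least representative task comes first.
    typicality = (similarity.sum(axis=1) - np.diag(similarity)) / (n - 1)
    order = list(np.argsort(typicality))

    def adjacent_cost(seq):
        # Total similarity between consecutive tasks in the sequence.
        return sum(similarity[seq[i], seq[i + 1]] for i in range(n - 1))

    # Principle 2: greedily swap neighbors while that reduces the
    # similarity between adjacent tasks.
    for _ in range(swap_passes):
        improved = False
        for i in range(n - 1):
            trial = order.copy()
            trial[i], trial[i + 1] = trial[i + 1], trial[i]
            if adjacent_cost(trial) < adjacent_cost(order):
                order = trial
                improved = True
        if not improved:
            break
    return order

# Toy usage: five tasks with a random symmetric similarity matrix.
rng = np.random.default_rng(0)
s = rng.uniform(0, 1, size=(5, 5))
s = (s + s.T) / 2
np.fill_diagonal(s, 1.0)
print(order_tasks(s))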

Cite

Text

Li and Hiratani. "Optimal Task Order for Continual Learning of Multiple Tasks." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Li and Hiratani. "Optimal Task Order for Continual Learning of Multiple Tasks." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/li2025icml-optimal-a/)

BibTeX

@inproceedings{li2025icml-optimal-a,
  title     = {{Optimal Task Order for Continual Learning of Multiple Tasks}},
  author    = {Li, Ziyan and Hiratani, Naoki},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {34578--34603},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/li2025icml-optimal-a/}
}