Commute Your Domains: Trajectory Optimality Criterion for Multi-Domain Learning

Abstract

In multi-domain learning, a single model is trained on diverse data domains to leverage shared knowledge and improve generalization. The order in which data from these domains is used for training can significantly affect the model's performance on each domain, yet this dependence remains under-studied. In this paper, we investigate the influence of training order (or data mixing) in multi-domain learning using the concept of the Lie bracket of gradient vector fields. By analyzing the infinitesimal effects of changing the training order, we identify regions in the parameter space where altering the order between two training domains can benefit the target loss. We validate the predictions of our theoretical framework both on a toy example and on bilingual LLM pre-training.
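The core observation can be illustrated with a minimal sketch (not the paper's exact setup): for two quadratic domain losses with non-commuting Hessians, the negative-gradient vector fields f(x) = -Ax and g(x) = -Bx have Lie bracket [f, g](x) = (BA - AB)x, and the gap between training on domain A then B versus B then A is, to second order in the step size, exactly the step size squared times this bracket. All names and matrices below are illustrative assumptions.

```python
import numpy as np

# Two quadratic domain losses L_A(x) = ½ xᵀAx and L_B(x) = ½ xᵀBx
# with non-commuting Hessians, so the order of gradient steps matters.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 4.0]])
x0 = np.array([1.0, -1.0])
eps = 1e-3  # step size (learning rate)

def step(x, H, lr):
    """One gradient-descent step on the quadratic loss with Hessian H."""
    return x - lr * (H @ x)

# Train on domain A first then B, and in the opposite order.
x_ab = step(step(x0, A, eps), B, eps)
x_ba = step(step(x0, B, eps), A, eps)

# For negative-gradient fields f(x) = -Ax, g(x) = -Bx the Lie bracket is
# [f, g](x) = (BA - AB) x, and for quadratics the order gap is exactly
# eps² · [f, g](x0).
bracket = (B @ A - A @ B) @ x0
gap = x_ab - x_ba
print(np.allclose(gap, eps**2 * bracket))  # True: order effect = eps²·bracket
```

When the Hessians commute (e.g. both diagonal), the bracket vanishes and the training order has no second-order effect, which matches the commutativity criterion the title alludes to.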

Cite

Text

Rukhovich et al. "Commute Your Domains: Trajectory Optimality Criterion for Multi-Domain Learning." NeurIPS 2024 Workshops: M3L, 2024.

Markdown

[Rukhovich et al. "Commute Your Domains: Trajectory Optimality Criterion for Multi-Domain Learning." NeurIPS 2024 Workshops: M3L, 2024.](https://mlanthology.org/neuripsw/2024/rukhovich2024neuripsw-commute/)

BibTeX

@inproceedings{rukhovich2024neuripsw-commute,
  title     = {{Commute Your Domains: Trajectory Optimality Criterion for Multi-Domain Learning}},
  author    = {Rukhovich, Alexey and Podolskiy, Alexander and Piontkovskaya, Irina},
  booktitle = {NeurIPS 2024 Workshops: M3L},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/rukhovich2024neuripsw-commute/}
}