Co-Regularization Enhances Knowledge Transfer in High Dimensions

Abstract

Most existing transfer learning algorithms for high-dimensional models employ a two-step regularization framework, whose success hinges heavily on the assumption that the pre-trained model closely resembles the target. To relax this assumption, we propose a co-regularization process that directly exploits beneficial knowledge from the source domain for high-dimensional generalized linear models. The proposed method learns the target parameter by constraining the source parameters to be close to the target parameter, thereby preventing the fine-tuning failures caused by pre-trained parameters that deviate substantially from the target. Our theoretical analysis demonstrates that the proposed method accommodates a broader range of sources than existing two-step frameworks and is therefore more robust to less similar sources. Its effectiveness is validated through extensive empirical studies.
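To make the idea concrete, a minimal sketch of a co-regularized objective of the kind the abstract describes is given below, jointly estimating the target parameter \beta and a source parameter w while penalizing their discrepancy, rather than fine-tuning a fixed pre-trained estimate. The single-source setup, the \ell_1 penalties, and all symbols here are illustrative assumptions, not the paper's exact formulation:

  \min_{\beta,\, w} \;
      \frac{1}{n_T} \sum_{i=1}^{n_T} \ell\!\left(y_i,\, x_i^{\top}\beta\right)
    + \frac{1}{n_S} \sum_{j=1}^{n_S} \ell\!\left(\tilde{y}_j,\, \tilde{x}_j^{\top} w\right)
    + \lambda_1 \lVert \beta \rVert_1
    + \lambda_2 \lVert w - \beta \rVert_1

Here \ell is the GLM negative log-likelihood, (x_i, y_i) are the n_T target samples, and (\tilde{x}_j, \tilde{y}_j) are the n_S source samples. The \lVert w - \beta \rVert_1 term pulls the source parameter toward the target one, so a dissimilar source incurs a large penalty instead of biasing the target estimate, which is the robustness property the abstract highlights.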

Cite

Text

Liu et al. "Co-Regularization Enhances Knowledge Transfer in High Dimensions." Advances in Neural Information Processing Systems, 2025.

Markdown

[Liu et al. "Co-Regularization Enhances Knowledge Transfer in High Dimensions." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/liu2025neurips-coregularization/)

BibTeX

@inproceedings{liu2025neurips-coregularization,
  title     = {{Co-Regularization Enhances Knowledge Transfer in High Dimensions}},
  author    = {Liu, Shuo Shuo and Lin, Haotian and Reimherr, Matthew and Li, Runze},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/liu2025neurips-coregularization/}
}