How Much Can Transfer? BRIDGE: Bounded Multi-Domain Graph Foundation Model with Generalization Guarantees

Abstract

Graph Foundation Models hold significant potential for advancing multi-domain graph learning, yet their full capabilities remain largely untapped. Existing works report promising task performance under the “pretrain-then-prompt” paradigm, but this paradigm lacks the theoretical foundations needed to explain why it works and how much knowledge can be transferred from the source domains to the target. In this paper, we introduce BRIDGE, a bounded graph foundation model pre-trained on multiple domains with generalization guarantees. To learn discriminative source knowledge, we align multi-domain graph features with domain-invariant aligners during pre-training. A lightweight Mixture of Experts (MoE) network is then proposed to facilitate downstream prompting through self-supervised, selective knowledge assembly and transfer. Further, to determine the maximum amount of transferable knowledge, we derive an optimizable upper bound on the generalization error from a graph spectral perspective under Lipschitz continuity. Extensive experiments demonstrate the superiority of BRIDGE on both node and graph classification against 15 state-of-the-art baselines.
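The selective knowledge assembly described in the abstract can be pictured as a small gating network that mixes the outputs of per-domain aligners when prompting a downstream task. Below is a minimal, illustrative PyTorch sketch of such a mixture-of-experts prompt layer; the class name PromptMoE, the plain linear experts, and all layer sizes are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class PromptMoE(nn.Module):
    """Illustrative sketch: a lightweight MoE gate over per-domain aligners.

    Each expert stands in for a domain-invariant aligner learned during
    pre-training; the gate scores how much knowledge each source domain
    contributes to the target embedding.
    """

    def __init__(self, dim: int, num_experts: int):
        super().__init__()
        # One lightweight expert per source domain (hypothetical stand-ins
        # for the pre-trained domain-invariant aligners).
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        # Gate produces per-expert mixing weights from the input embedding.
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: node/graph embeddings of shape (batch, dim)
        weights = torch.softmax(self.gate(h), dim=-1)                    # (batch, E)
        expert_out = torch.stack([e(h) for e in self.experts], dim=1)    # (batch, E, dim)
        # Weighted assembly of source knowledge into a single embedding.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)           # (batch, dim)

# Toy usage with made-up sizes.
moe = PromptMoE(dim=64, num_experts=3)
z = moe(torch.randn(8, 64))  # -> tensor of shape (8, 64)

In practice the experts would be frozen pre-trained modules and the gate trained with a self-supervised objective, but the mixing mechanism above captures the core idea of selecting how much each source domain transfers.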

Cite

Text

Yuan et al. "How Much Can Transfer? BRIDGE: Bounded Multi-Domain Graph Foundation Model with Generalization Guarantees." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Yuan et al. "How Much Can Transfer? BRIDGE: Bounded Multi-Domain Graph Foundation Model with Generalization Guarantees." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/yuan2025icml-much/)

BibTeX

@inproceedings{yuan2025icml-much,
  title     = {{How Much Can Transfer? BRIDGE: Bounded Multi-Domain Graph Foundation Model with Generalization Guarantees}},
  author    = {Yuan, Haonan and Sun, Qingyun and Shi, Junhua and Fu, Xingcheng and Hooi, Bryan and Li, Jianxin and Yu, Philip S.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {73604--73644},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/yuan2025icml-much/}
}