Towards Foundation Models on Graphs: An Analysis on Cross-Dataset Transfer of Pretrained GNNs

Abstract

To develop a preliminary understanding of Graph Foundation Models, we study the extent to which pretrained Graph Neural Networks can be applied across datasets, an effort that requires being agnostic to dataset-specific features and their encodings. We build upon a purely structural pretraining approach and propose an extension to capture feature information while remaining feature-agnostic. We evaluate pretrained models on downstream tasks for varying amounts of training samples and choices of pretraining datasets. Our preliminary results indicate that embeddings from pretrained models improve generalization only with enough downstream data points and to a degree that depends on the quantity and properties of the pretraining data. Feature information can lead to improvements, but currently requires some similarity between the pretraining and downstream feature spaces.

Cite

Text

Frasca et al. "Towards Foundation Models on Graphs: An Analysis on Cross-Dataset Transfer of Pretrained GNNs." NeurIPS 2024 Workshops: NeurReps, 2024.

Markdown

[Frasca et al. "Towards Foundation Models on Graphs: An Analysis on Cross-Dataset Transfer of Pretrained GNNs." NeurIPS 2024 Workshops: NeurReps, 2024.](https://mlanthology.org/neuripsw/2024/frasca2024neuripsw-foundation/)

BibTeX

@inproceedings{frasca2024neuripsw-foundation,
  title     = {{Towards Foundation Models on Graphs: An Analysis on Cross-Dataset Transfer of Pretrained GNNs}},
  author    = {Frasca, Fabrizio and Jogl, Fabian and Eliasof, Moshe and Ostrovsky, Matan and Schönlieb, Carola-Bibiane and Gärtner, Thomas and Maron, Haggai},
  booktitle = {NeurIPS 2024 Workshops: NeurReps},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/frasca2024neuripsw-foundation/}
}