Its All Graph to Me: Single-Model Graph Representation Learning on Multiple Domains
Abstract
Graph neural networks (GNNs) have revolutionised the field of graph representation learning and play a critical role in graph-based research. Recent work explores applying GNNs to pre-training and fine-tuning, where a model is trained on a large dataset and its learnt representations are then transferred to a smaller dataset. However, current work explores pre-training only within a single domain; for example, a model pre-trained on molecular graphs is fine-tuned on other molecular graphs. This leads to poor generalisability of pre-trained models to novel domains and tasks. In this work, we curate a multi-domain graph dataset and apply state-of-the-art Graph Adversarial Contrastive Learning (GACL) methods. We present a pre-trained graph model that may have the capability of acting as a foundational graph model. We evaluate the efficacy of its learnt representations on various downstream tasks against baseline models pre-trained on single domains. In addition, we compare our model to untrained and non-transferred models, and show that our foundational model achieves performance equal to or better than task-specific methods.
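The abstract describes contrastive pre-training of a single GNN encoder over graphs pooled from multiple domains. As a rough, non-authoritative illustration of that setup, the sketch below uses a plain GraphCL-style objective (random edge-dropout views plus an NT-Xent loss) in place of the learned adversarial augmentation that GACL uses; it assumes PyTorch Geometric, and every name in it (`Encoder`, `nt_xent`, `pretrain_step`) is hypothetical rather than taken from the authors' code.

```python
# Illustrative sketch only: GraphCL-style contrastive pre-training of one
# encoder on batches mixed from several graph domains. NOT the paper's
# implementation; GACL learns its augmentations adversarially, whereas
# random edge dropout is used here for brevity.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool
from torch_geometric.utils import dropout_edge

class Encoder(torch.nn.Module):
    """Two-layer GCN with mean pooling: one embedding per graph."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index)
        return global_mean_pool(h, batch)

def nt_xent(z1, z2, tau=0.5):
    """Simplified (one-directional) NT-Xent / InfoNCE loss: matching
    views of the same graph are positives, all others negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def pretrain_step(model, optimizer, batch):
    # Two stochastic edge-dropout views of the same multi-domain batch.
    e1, _ = dropout_edge(batch.edge_index, p=0.2)
    e2, _ = dropout_edge(batch.edge_index, p=0.2)
    z1 = model(batch.x, e1, batch.batch)
    z2 = model(batch.x, e2, batch.batch)
    loss = nt_xent(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The multi-domain aspect lives entirely in the data loader: batches would be drawn from a pool of datasets (molecules, social networks, citation graphs, and so on) so the single encoder sees all domains during pre-training, before its representations are transferred downstream.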
Cite
Text
Davies et al. "Its All Graph to Me: Single-Model Graph Representation Learning on Multiple Domains." NeurIPS 2023 Workshops: GLFrontiers, 2023.
Markdown
[Davies et al. "Its All Graph to Me: Single-Model Graph Representation Learning on Multiple Domains." NeurIPS 2023 Workshops: GLFrontiers, 2023.](https://mlanthology.org/neuripsw/2023/davies2023neuripsw-all/)
BibTeX
@inproceedings{davies2023neuripsw-all,
  title     = {{Its All Graph to Me: Single-Model Graph Representation Learning on Multiple Domains}},
  author    = {Davies, Alex and Green, Riku and Ajmeri, Nirav and Filho, Telmo Silva},
  booktitle = {NeurIPS 2023 Workshops: GLFrontiers},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/davies2023neuripsw-all/}
}