AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs
Abstract
Fine-tuning pre-trained models has recently yielded remarkable performance gains in graph neural networks (GNNs). In addition to pre-training techniques, inspired by recent advances in natural language processing, more recent work has shifted towards applying effective fine-tuning approaches, such as parameter-efficient fine-tuning (PEFT). However, given the substantial differences between GNNs and transformer-based models, applying such approaches directly to GNNs has proved less effective. In this paper, we present a comprehensive comparison of PEFT techniques for GNNs and propose a novel PEFT method specifically designed for GNNs, called AdapterGNN. AdapterGNN preserves the knowledge of the large pre-trained model and leverages highly expressive adapters for GNNs, which can adapt to downstream tasks effectively with only a few parameters, while also improving the model's generalization ability. Extensive experiments show that AdapterGNN achieves higher performance than other PEFT methods and is the only one that consistently surpasses full fine-tuning (outperforming it by 1.6% and 5.7% in the chemistry and biology domains respectively, with only 5% and 4% of its parameters tuned), with lower generalization gaps. Moreover, we empirically show that a larger GNN model can have worse generalization ability, which differs from the trend observed in large transformer-based models. Building upon this, we provide a theoretical justification, via generalization bounds, that PEFT can improve the generalization of GNNs. Our code is available at https://github.com/Lucius-lsr/AdapterGNN.
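The abstract describes the adapter approach only at a high level. Below is a minimal sketch of the standard bottleneck-adapter pattern applied to a frozen pre-trained GNN layer, written in PyTorch. The names `AdapterModule`, `AdaptedGNNLayer`, and `bottleneck_dim`, as well as the near-zero initialization, are illustrative assumptions for exposition, not the paper's exact implementation; see the linked repository for the authors' code.

```python
# A hypothetical sketch of parameter-efficient fine-tuning with a bottleneck
# adapter around a frozen pre-trained GNN layer. Assumed design: down-project
# -> nonlinearity -> up-project, with a residual connection.
import torch
import torch.nn as nn


class AdapterModule(nn.Module):
    """Bottleneck adapter: only these few parameters are trained."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 8):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        # Assumption: zero-init the up-projection so the adapter starts as
        # the identity map and fine-tuning departs gently from pre-training.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen representation intact.
        return h + self.up(self.act(self.down(h)))


class AdaptedGNNLayer(nn.Module):
    """Wraps one frozen pre-trained GNN layer with a trainable adapter."""

    def __init__(self, pretrained_layer: nn.Module, hidden_dim: int):
        super().__init__()
        self.layer = pretrained_layer
        for p in self.layer.parameters():
            p.requires_grad = False  # preserve the pre-trained knowledge
        self.adapter = AdapterModule(hidden_dim)

    def forward(self, h: torch.Tensor, *args, **kwargs) -> torch.Tensor:
        h = self.layer(h, *args, **kwargs)  # frozen message passing / update
        return self.adapter(h)              # small trainable task-specific correction
```

As a rough sanity check under these assumptions: with a hidden width of 300 and `bottleneck_dim = 8`, one adapter adds about 5k trainable parameters, versus on the order of 180k in a frozen GIN layer's two-layer MLP, so the trainable fraction lands in the low single-digit percent range, of the same order as the 4-5% figures reported in the abstract.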
Cite
Text
Li et al. "AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I12.29264Markdown
[Li et al. "AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/li2024aaai-adaptergnn/) doi:10.1609/AAAI.V38I12.29264BibTeX
@inproceedings{li2024aaai-adaptergnn,
title = {{AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs}},
author = {Li, Shengrui and Han, Xueting and Bai, Jing},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
  pages = {13600--13608},
doi = {10.1609/AAAI.V38I12.29264},
url = {https://mlanthology.org/aaai/2024/li2024aaai-adaptergnn/}
}