Graph Low-Rank Adapters of High Regularity for Graph Neural Networks and Graph Transformers
Abstract
We introduce a new low-rank graph adapter, GConv-Adapter, that leverages a two-fold normalized graph convolution and trainable low-rank weight matrices to achieve state-of-the-art (SOTA) and near-SOTA performance when fine-tuning both standard message-passing neural networks (MPNNs) and graph transformers (GTs), in inductive as well as transductive learning. We motivate our design by deriving an upper bound on the adapter's Lipschitz constant for $\delta$-regular random (expander) graphs, and we compare it against previous methods, whose Lipschitz constants we show can be unbounded.
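To make the construction concrete, below is a minimal PyTorch sketch of an adapter of this shape, under stated assumptions: we read "two-fold normalized graph convolution" as two propagations with the symmetrically normalized adjacency $\hat{A} = D^{-1/2}(A+I)D^{-1/2}$ wrapped around a trainable rank-$r$ bottleneck, added residually to frozen backbone features with a zero-initialized up-projection. The class name `GConvAdapter`, the residual form, the initialization, and all hyperparameters are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


def sym_normalize(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}; self-loops keep degrees positive."""
    adj = adj + torch.eye(adj.size(0), device=adj.device)
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


class GConvAdapter(nn.Module):
    """Assumed form: two normalized propagations around a trainable rank-r bottleneck."""

    def __init__(self, dim: int, rank: int):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)  # low-rank down-projection
        self.up = nn.Linear(rank, dim, bias=False)    # low-rank up-projection
        nn.init.zeros_(self.up.weight)                # adapter starts as a no-op

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = adj_norm @ self.down(x)  # first normalized convolution, in rank-r space
        h = adj_norm @ self.up(h)    # second normalized convolution, back to dim
        return x + h                 # residual update on the frozen backbone's features


# Toy usage: 5 nodes with 16-dimensional features.
x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()  # symmetrize the random adjacency
out = GConvAdapter(dim=16, rank=4)(x, sym_normalize(adj))
print(out.shape)  # torch.Size([5, 16])
```

One reason a Lipschitz bound is plausible for this shape: the symmetrically normalized adjacency has spectral norm at most one, so the non-residual part of the sketch is Lipschitz with constant at most $\|W_{\text{up}}\|\,\|W_{\text{down}}\|$; the paper's actual derivation for $\delta$-regular expander graphs may differ.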
Cite
Text
Papageorgiou et al. "Graph Low-Rank Adapters of High Regularity for Graph Neural Networks and Graph Transformers." ICLR 2025 Workshops: SCOPE, 2025.Markdown
[Papageorgiou et al. "Graph Low-Rank Adapters of High Regularity for Graph Neural Networks and Graph Transformers." ICLR 2025 Workshops: SCOPE, 2025.](https://mlanthology.org/iclrw/2025/papageorgiou2025iclrw-graph/)BibTeX
@inproceedings{papageorgiou2025iclrw-graph,
  title     = {{Graph Low-Rank Adapters of High Regularity for Graph Neural Networks and Graph Transformers}},
  author    = {Papageorgiou, Pantelis and de Ocáriz Borde, Haitz Sáez and Kratsios, Anastasis and Bronstein, Michael M.},
  booktitle = {ICLR 2025 Workshops: SCOPE},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/papageorgiou2025iclrw-graph/}
}