Graph Meta Learning via Local Subgraphs
Abstract
Prevailing methods for graphs require abundant label and edge information for learning. When data for a new task are scarce, meta-learning can learn from prior experiences and form much-needed inductive biases for fast adaptation to new tasks. Here, we introduce G-Meta, a novel meta-learning algorithm for graphs. G-Meta uses local subgraphs to transfer subgraph-specific information and learn transferable knowledge faster via meta gradients. G-Meta learns how to quickly adapt to a new task using only a handful of nodes or edges in the new task and does so by learning from data points in other graphs or related, albeit disjoint, label sets. G-Meta is theoretically justified as we show that the evidence for a prediction can be found in the local subgraph surrounding the target node or edge. Experiments on seven datasets and nine baseline methods show that G-Meta outperforms existing methods by up to 16.3%. Unlike previous methods, G-Meta successfully learns in challenging, few-shot learning settings that require generalization to completely new graphs and never-before-seen labels. Finally, G-Meta scales to large graphs, which we demonstrate on a new Tree-of-Life dataset comprising 1,840 graphs, a two-order-of-magnitude increase in the number of graphs used in prior work.
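For intuition, below is a minimal, hypothetical sketch (not the authors' implementation) of the two ingredients the abstract describes: extracting an h-hop local subgraph around a target node, here via `networkx.ego_graph`, and a MAML-style inner/outer update whose meta gradient flows through the inner adaptation. The toy linear classifier, random node features, and the 3-shot task split are illustrative assumptions only.

```python
# Illustrative sketch (not the G-Meta codebase): local-subgraph features
# plus a MAML-style inner/outer loop with a toy linear classifier.
import networkx as nx
import torch
import torch.nn.functional as F

def local_subgraph(G, node, h=2):
    """Induced subgraph of all nodes within h hops of `node`."""
    return nx.ego_graph(G, node, radius=h)

def subgraph_embedding(G, node, feats, h=2):
    """Mean-pool (assumed) node features over the local subgraph."""
    sub = local_subgraph(G, node, h)
    return torch.stack([feats[n] for n in sub.nodes]).mean(dim=0)

def forward(params, x):
    W, b = params                      # toy linear model standing in for a GNN
    return x @ W + b

def inner_adapt(params, support, lr_inner=0.01, steps=5):
    """Inner loop: adapt on a task's support set, keeping the graph for meta gradients."""
    W, b = params
    xs, ys = support
    for _ in range(steps):
        loss = F.cross_entropy(forward((W, b), xs), ys)
        gW, gb = torch.autograd.grad(loss, (W, b), create_graph=True)
        W, b = W - lr_inner * gW, b - lr_inner * gb
    return W, b

if __name__ == "__main__":
    torch.manual_seed(0)
    G = nx.karate_club_graph()
    feats = {n: torch.randn(8) for n in G.nodes}                    # assumed features
    labels = {n: int(G.nodes[n]["club"] == "Officer") for n in G.nodes}

    W = torch.randn(8, 2, requires_grad=True)
    b = torch.zeros(2, requires_grad=True)
    meta_opt = torch.optim.Adam([W, b], lr=1e-3)

    # One meta-training step on a single toy task (3-shot support, small query set).
    nodes = list(G.nodes)
    support_nodes, query_nodes = nodes[:3], nodes[3:10]
    embed = lambda ns: torch.stack([subgraph_embedding(G, n, feats) for n in ns])
    support = (embed(support_nodes), torch.tensor([labels[n] for n in support_nodes]))
    query = (embed(query_nodes), torch.tensor([labels[n] for n in query_nodes]))

    W_t, b_t = inner_adapt((W, b), support)
    meta_loss = F.cross_entropy(forward((W_t, b_t), query[0]), query[1])
    meta_opt.zero_grad()
    meta_loss.backward()               # meta gradient flows through the inner loop
    meta_opt.step()
    print(f"meta loss: {meta_loss.item():.4f}")
```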
Cite
Text
Huang and Zitnik. "Graph Meta Learning via Local Subgraphs." Neural Information Processing Systems, 2020.
Markdown
[Huang and Zitnik. "Graph Meta Learning via Local Subgraphs." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/huang2020neurips-graph/)
BibTeX
@inproceedings{huang2020neurips-graph,
  title = {{Graph Meta Learning via Local Subgraphs}},
  author = {Huang, Kexin and Zitnik, Marinka},
  booktitle = {Neural Information Processing Systems},
  year = {2020},
  url = {https://mlanthology.org/neurips/2020/huang2020neurips-graph/}
}