Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking

Abstract

Link prediction attempts to predict whether an unseen edge exists based on only a portion of the graph. A flurry of methods has been created in recent years that attempt to make use of graph neural networks (GNNs) for this task. Furthermore, new and diverse datasets have also been created to better evaluate the effectiveness of these new models. However, multiple limitations currently exist that hinder our ability to properly evaluate these new methods. These include, but are not limited to: (1) the underreporting of performance on multiple baselines, (2) the lack of a unified data split and evaluation metric on some datasets, and (3) an unrealistic evaluation setting that produces negative samples that are easy to classify. To overcome these challenges, we first conduct a fair comparison across prominent methods and datasets, using the same dataset and hyperparameter settings. We then create a new real-world evaluation setting that samples difficult negatives via multiple heuristics. The new evaluation setting helps promote new challenges and opportunities in link prediction by aligning the evaluation with real-world situations.
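To make the idea of heuristic-guided hard negatives concrete, below is a minimal sketch of one way such sampling could look. It is an illustrative assumption, not the paper's exact procedure: it uses a single heuristic (common-neighbor count) to rank unlinked candidate nodes for a source node, whereas the paper combines multiple heuristics. The function name and parameters are hypothetical.

```python
# Illustrative sketch (assumption, not the paper's exact procedure):
# instead of sampling negatives uniformly at random, rank non-neighbors
# of a source node by a heuristic score and keep the top-k as "hard" negatives.
import networkx as nx


def hard_negatives(G: nx.Graph, u, k=5):
    """Return up to k non-neighbors of u with the highest common-neighbor count."""
    neighbors = set(G[u])
    candidates = [w for w in G.nodes if w != u and w not in neighbors]
    # Score each candidate by how many neighbors it shares with u.
    scored = [(len(neighbors & set(G[w])), w) for w in candidates]
    scored.sort(reverse=True)
    return [w for score, w in scored[:k] if score > 0]


if __name__ == "__main__":
    G = nx.karate_club_graph()
    # Hard negatives for node 0: unlinked nodes that share many neighbors with it,
    # which are much harder to separate from true edges than random node pairs.
    print(hard_negatives(G, u=0, k=5))
```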

Cite

Text

Li et al. "Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking." Neural Information Processing Systems, 2023.

Markdown

[Li et al. "Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/li2023neurips-evaluating/)

BibTeX

@inproceedings{li2023neurips-evaluating,
  title     = {{Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking}},
  author    = {Li, Juanhui and Shomer, Harry and Mao, Haitao and Zeng, Shenglai and Ma, Yao and Shah, Neil and Tang, Jiliang and Yin, Dawei},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/li2023neurips-evaluating/}
}