Layer-Diverse Negative Sampling for Graph Neural Networks
Abstract
Graph neural networks (GNNs) are a powerful solution for various structure learning applications due to their strong representation capabilities for graph data. However, traditional GNNs rely on message-passing mechanisms that gather information exclusively from first-order neighbours (known as positive samples), which can lead to issues such as over-smoothing and over-squashing. To mitigate these issues, we propose a layer-diverse negative sampling method for message-passing propagation. This method employs a sampling matrix within a determinantal point process, which transforms the candidate set into a space and selectively samples from this space to generate negative samples. To further enhance the diversity of the negative samples during each forward pass, we develop a space-squeezing method to achieve layer-wise diversity in multi-layer GNNs. Experiments on various real-world graph datasets demonstrate the effectiveness of our approach in improving the diversity of negative samples and overall learning performance. Moreover, adding negative samples dynamically changes the graph's topology, which gives the method strong potential to improve the expressiveness of GNNs and reduce the risk of over-squashing.
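The abstract's core mechanism is selecting diverse negative samples via a determinantal point process (DPP). As a rough illustration only (not the paper's algorithm), the following sketch uses greedy MAP inference for a DPP: given hypothetical candidate node embeddings, it builds a similarity kernel and greedily picks the subset whose kernel submatrix has the largest determinant, which favours mutually dissimilar (diverse) candidates. The function name, inputs, and the greedy strategy are all illustrative assumptions.

```python
import numpy as np

def greedy_dpp_select(features, k):
    """Greedily pick k diverse candidates under a DPP likelihood.

    features: (n, d) array of candidate node embeddings (illustrative input,
              not the paper's exact candidate construction).
    Returns a list of k selected indices.

    Greedy MAP inference: at each step, add the candidate that maximizes
    log det(L_S), where L = features @ features.T is the DPP kernel and
    S is the selected subset. Larger determinants correspond to more
    mutually dissimilar (diverse) selections.
    """
    n = features.shape[0]
    L = features @ features.T
    selected, remaining = [], list(range(n))
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in remaining:
            S = selected + [i]
            sub = L[np.ix_(S, S)]
            # Small jitter keeps slogdet stable for near-singular submatrices.
            _, logdet = np.linalg.slogdet(sub + 1e-9 * np.eye(len(S)))
            if logdet > best_gain:
                best, best_gain = i, logdet
        selected.append(best)
        remaining.remove(best)
    return selected
```

In a GNN layer, the selected indices would serve as negative samples whose messages are subtracted (or otherwise down-weighted) during aggregation; the paper's layer-diverse variant additionally squeezes the sampling space across layers so different layers draw different negatives.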
Cite
Text:
Duan et al. "Layer-Diverse Negative Sampling for Graph Neural Networks." Transactions on Machine Learning Research, 2024.

Markdown:
[Duan et al. "Layer-Diverse Negative Sampling for Graph Neural Networks." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/duan2024tmlr-layerdiverse/)

BibTeX:
@article{duan2024tmlr-layerdiverse,
title = {{Layer-Diverse Negative Sampling for Graph Neural Networks}},
author = {Duan, Wei and Lu, Jie and Wang, Yu Guang and Xuan, Junyu},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/duan2024tmlr-layerdiverse/}
}