Unraveling the Impact of Heterophilic Structures on Graph Positive-Unlabeled Learning
Abstract
While Positive-Unlabeled (PU) learning is vital in many real-world scenarios, its application to graph data remains under-explored. We unveil that a critical challenge for PU learning on graphs lies in edge heterophily, which directly violates the $\textit{irreducibility assumption}$ for $\textit{Class-Prior Estimation}$ (the class prior is essential for building PU learning algorithms) and degrades latent label inference on unlabeled nodes during classifier training. In response to this challenge, we introduce a new method, named $\textit{$\underline{G}$raph $\underline{P}$U Learning with $\underline{L}$abel Propagation Loss}$ (GPL). Specifically, GPL learns from PU nodes along with an intermediate heterophily reduction, which helps mitigate the negative impact of the heterophilic structure. We formulate this procedure as a bilevel optimization that reduces heterophily in the inner loop and efficiently learns a classifier in the outer loop. Extensive experiments across a variety of datasets show that GPL significantly outperforms baseline methods, confirming its effectiveness and superiority.
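The abstract describes the bilevel structure only at a high level. The sketch below is a minimal, hedged illustration of that structure, not the authors' GPL implementation: the inner loop down-weights edges whose endpoints disagree under the current classifier (a simple stand-in for heterophily reduction), and the outer loop trains the classifier with a non-negative PU risk (Kiryo et al., 2017). All names (`edge_logits`, `propagate`, the class prior `pi_p`, the toy random graph) are illustrative assumptions.

```python
# Hedged sketch of the bilevel structure described in the abstract.
# Not the authors' code; all components are simplified stand-ins.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d, pi_p = 100, 16, 0.4                # nodes, feature dim, assumed class prior
X = torch.randn(n, d)                    # node features
A = (torch.rand(n, n) < 0.05).float()    # random adjacency, symmetrized below
A = ((A + A.t()) > 0).float()
pos_idx = torch.arange(0, 20)            # labeled-positive nodes; rest unlabeled
unl_idx = torch.arange(20, n)

W = torch.nn.Parameter(torch.randn(d, 1) * 0.1)      # linear classifier weights
edge_logits = torch.nn.Parameter(torch.zeros(n, n))  # learnable edge reweighting

def propagate(weights):
    """One weighted mean-aggregation step (a stand-in for a GNN layer)."""
    Aw = A * torch.sigmoid(weights)
    deg = Aw.sum(1, keepdim=True).clamp(min=1.0)
    return (Aw @ X) / deg

def scores(weights):
    return (propagate(weights) @ W).squeeze(-1)      # one logit per node

def nnpu_risk(s):
    """Non-negative PU risk with the logistic loss and class prior pi_p."""
    loss_p  = F.softplus(-s[pos_idx]).mean()         # positives scored as positive
    loss_pn = F.softplus(s[pos_idx]).mean()          # positives scored as negative
    loss_u  = F.softplus(s[unl_idx]).mean()          # unlabeled scored as negative
    return pi_p * loss_p + torch.clamp(loss_u - pi_p * loss_pn, min=0.0)

opt_outer = torch.optim.Adam([W], lr=1e-2)
opt_inner = torch.optim.Adam([edge_logits], lr=1e-2)

for step in range(200):
    # Inner loop: reduce heterophily by penalizing edges that connect nodes
    # with disagreeing pseudo-labels under the current classifier.
    for _ in range(5):
        p = torch.sigmoid(scores(edge_logits))
        disagree = (p.unsqueeze(1) - p.unsqueeze(0)).abs()  # pairwise label gap
        inner_loss = (A * torch.sigmoid(edge_logits) * disagree).sum() / A.sum()
        opt_inner.zero_grad(); inner_loss.backward(); opt_inner.step()
    # Outer loop: train the classifier on the reweighted graph with the PU risk.
    outer_loss = nnpu_risk(scores(edge_logits.detach()))
    opt_outer.zero_grad(); outer_loss.backward(); opt_outer.step()
```

In this toy setup the inner objective only reshapes the graph (edge weights) while the outer objective only updates the classifier, mirroring the division of labor the abstract assigns to the two loops; the paper's actual losses and propagation scheme may differ.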
Cite
Text
Wu et al. "Unraveling the Impact of Heterophilic Structures on Graph Positive-Unlabeled Learning." International Conference on Machine Learning, 2024.

Markdown

[Wu et al. "Unraveling the Impact of Heterophilic Structures on Graph Positive-Unlabeled Learning." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/wu2024icml-unraveling/)

BibTeX
@inproceedings{wu2024icml-unraveling,
title = {{Unraveling the Impact of Heterophilic Structures on Graph Positive-Unlabeled Learning}},
author = {Wu, Yuhao and Yao, Jiangchao and Han, Bo and Yao, Lina and Liu, Tongliang},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {53928--53943},
volume = {235},
url = {https://mlanthology.org/icml/2024/wu2024icml-unraveling/}
}