BANGS: Game-Theoretic Node Selection for Graph Self-Training
Abstract
Graph self-training is a semi-supervised learning method that iteratively selects a set of unlabeled data to retrain the underlying graph neural network (GNN) model and improve its prediction performance. While selecting highly confident nodes has proven effective for self-training, this pseudo-labeling strategy ignores the combinatorial dependencies between nodes and suffers from a local view of the distribution. To overcome these issues, we propose BANGS, a novel framework that unifies the labeling strategy with conditional mutual information as the objective of node selection. Our approach---grounded in game theory---selects nodes in a combinatorial fashion and provides theoretical guarantees for robustness under a noisy objective. More specifically, unlike traditional methods that rank and select nodes independently, BANGS considers nodes as a collective set in the self-training process. Our method demonstrates superior performance and robustness across various datasets, base models, and hyperparameter settings, outperforming existing techniques. The codebase is available at https://github.com/fangxin-wang/BANGS.
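To illustrate the core idea of combinatorial node selection (as opposed to ranking nodes independently by confidence), here is a minimal, self-contained sketch. It is not the paper's algorithm: the set-level utility, the similarity matrix, and the greedy marginal-gain loop are illustrative stand-ins for BANGS's conditional-mutual-information objective and game-theoretic selection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: per-node confidence scores and a symmetric similarity matrix
# standing in for graph structure. All quantities here are illustrative.
n_nodes = 50
confidence = rng.uniform(0.5, 1.0, size=n_nodes)
similarity = rng.uniform(0.0, 1.0, size=(n_nodes, n_nodes))
similarity = (similarity + similarity.T) / 2
np.fill_diagonal(similarity, 0.0)


def set_utility(selected):
    """Hypothetical set-level objective: rewards confident nodes but
    discounts redundant (highly similar) selections, mimicking the idea
    that the value of a pseudo-labeled node depends on the whole set,
    not on the node in isolation."""
    if not selected:
        return 0.0
    idx = list(selected)
    conf_gain = confidence[idx].sum()
    redundancy = similarity[np.ix_(idx, idx)].sum()
    return conf_gain - 0.05 * redundancy


def greedy_select(budget):
    """Greedy combinatorial selection: at each step, add the node with the
    largest marginal gain in set utility."""
    selected = []
    for _ in range(budget):
        gains = {
            v: set_utility(selected + [v]) - set_utility(selected)
            for v in range(n_nodes)
            if v not in selected
        }
        selected.append(max(gains, key=gains.get))
    return selected


# Contrast independent top-k ranking with set-aware greedy selection.
top_k_by_confidence = list(np.argsort(-confidence)[:10])
combinatorial = greedy_select(10)
print("independent top-k :", sorted(top_k_by_confidence))
print("set-aware greedy  :", sorted(combinatorial))
```

The two selections typically differ: the set-aware criterion trades off individual confidence against redundancy among the chosen nodes, which is the kind of combinatorial dependency that independent ranking ignores.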
Cite
Text
Wang et al. "BANGS: Game-Theoretic Node Selection for Graph Self-Training." International Conference on Learning Representations, 2025.

Markdown

[Wang et al. "BANGS: Game-Theoretic Node Selection for Graph Self-Training." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/wang2025iclr-bangs/)

BibTeX
@inproceedings{wang2025iclr-bangs,
title = {{BANGS: Game-Theoretic Node Selection for Graph Self-Training}},
author = {Wang, Fangxin and Liu, Kay and Medya, Sourav and Yu, Philip S.},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/wang2025iclr-bangs/}
}