RIM: Reliable Influence-Based Active Learning on Graphs
Abstract
Message passing is the core of most graph models, such as Graph Convolutional Networks (GCN) and Label Propagation (LP), which usually require a large amount of clean labeled data to smooth out the neighborhood over the graph. However, the labeling process can be tedious, costly, and error-prone in practice. In this paper, we propose to unify active learning (AL) and message passing towards minimizing labeling costs, e.g., by making use of a few cheaply obtained, possibly unreliable labels. We make two contributions towards that end. First, we open up a new perspective by drawing a connection between AL on message-passing models and social influence maximization, ensuring that the selected samples effectively improve model performance. Second, we propose an extension to the influence model that incorporates an explicit quality factor to model label noise. In this way, we derive a fundamentally new AL selection criterion for GCN and LP, reliable influence maximization (RIM), which considers both the quantity and the quality of influence simultaneously. Empirical studies on public datasets show that RIM significantly outperforms current AL methods in terms of accuracy and efficiency.
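The selection criterion sketched in the abstract can be illustrated with a small greedy routine. The sketch below is not the paper's implementation; it only demonstrates the general idea under simplifying assumptions: influence is approximated by a k-step symmetrically normalized propagation matrix (GCN/LP-style smoothing), every label is assumed to have the same quality score, and a node counts as "reliably activated" once its accumulated quality-weighted influence from labeled nodes crosses a threshold. The names `rim_select`, `quality`, and `theta` are hypothetical choices for this illustration.

```python
import numpy as np

def propagation_matrix(A, k=2):
    # Symmetrically normalized adjacency with self-loops, propagated k steps,
    # as a stand-in for feature/label influence between nodes.
    A_hat = A + np.eye(len(A))
    d = A_hat.sum(axis=1)
    P = A_hat / np.sqrt(np.outer(d, d))
    return np.linalg.matrix_power(P, k)

def rim_select(A, budget, quality=0.9, theta=0.05):
    # Greedy reliable-influence maximization (illustrative sketch):
    # at each step, pick the unlabeled node whose quality-weighted influence
    # reliably activates the largest number of additional nodes.
    P = propagation_matrix(A)
    n = len(A)
    influence = np.zeros(n)  # accumulated reliable influence on each node
    selected = []
    for _ in range(budget):
        best, best_gain = None, -1.0
        for v in range(n):
            if v in selected:
                continue
            cand = influence + quality * P[:, v]
            gain = (cand >= theta).sum() - (influence >= theta).sum()
            if gain > best_gain:
                best, best_gain = v, gain
        selected.append(best)
        influence += quality * P[:, best]
    return selected
```

In this toy version, lowering `quality` shrinks each label's contribution, so more (or better-placed) labeled nodes are needed to reliably activate the same region of the graph, which is the intuition behind weighing quantity and quality of influence together.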
Cite
Text
Zhang et al. "RIM: Reliable Influence-Based Active Learning on Graphs." Neural Information Processing Systems, 2021.
Markdown
[Zhang et al. "RIM: Reliable Influence-Based Active Learning on Graphs." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/zhang2021neurips-rim/)
BibTeX
@inproceedings{zhang2021neurips-rim,
title = {{RIM: Reliable Influence-Based Active Learning on Graphs}},
author = {Zhang, Wentao and Wang, Yexin and You, Zhenbang and Cao, Meng and Huang, Ping and Shan, Jiulong and Yang, Zhi and Cui, Bin},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/zhang2021neurips-rim/}
}