Self-Training Based Few-Shot Node Classification by Knowledge Distillation

Abstract

Self-training based few-shot node classification (FSNC) methods have shown excellent performance in real applications, but they cannot make full use of the information in the base set and are easily affected by the quality of pseudo-labels. To address these issues, this paper proposes a new self-training FSNC method involving representation distillation and pseudo-label distillation. Specifically, representation distillation includes two knowledge distillation methods (i.e., local representation distillation and global representation distillation) to transfer the information in the base set to the novel set. Pseudo-label distillation is designed to conduct knowledge distillation on the pseudo-labels to improve their quality. Experimental results show that our method achieves superior performance compared with state-of-the-art methods. Our code and a comprehensive theoretical version are available at https://github.com/zongqianwu/KD-FSNC.
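
As a rough illustration of the distillation idea the abstract refers to, the snippet below sketches a generic temperature-scaled knowledge-distillation loss in PyTorch. The function name, the temperature value, and the toy 5-way logits are illustrative assumptions only, not the paper's actual representation-distillation or pseudo-label-distillation objectives.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Generic temperature-scaled KL distillation (Hinton-style); the paper's
    # local/global representation and pseudo-label terms may differ.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # KL(teacher || student), rescaled by T^2 as is standard for distillation.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage: distilling a teacher's soft pseudo-labels into a student
# on 32 novel-set nodes in a hypothetical 5-way few-shot task.
teacher_logits = torch.randn(32, 5)
student_logits = torch.randn(32, 5, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()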

Cite

Text

Wu et al. "Self-Training Based Few-Shot Node Classification by Knowledge Distillation." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I14.29530

Markdown

[Wu et al. "Self-Training Based Few-Shot Node Classification by Knowledge Distillation." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/wu2024aaai-self/) doi:10.1609/AAAI.V38I14.29530

BibTeX

@inproceedings{wu2024aaai-self,
  title     = {{Self-Training Based Few-Shot Node Classification by Knowledge Distillation}},
  author    = {Wu, Zongqian and Mo, Yujie and Zhou, Peng and Yuan, Shangbo and Zhu, Xiaofeng},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {15988--15995},
  doi       = {10.1609/AAAI.V38I14.29530},
  url       = {https://mlanthology.org/aaai/2024/wu2024aaai-self/}
}