Incremental Unsupervised Domain Adaptation on Evolving Graphs

Abstract

Non-stationary data distributions in evolving graphs can degrade deployed graph neural networks (GNNs); for example, a fraud-detection GNN can become ineffective when fraudsters alter their patterns. This study investigates how to incrementally adapt graph neural networks to incoming, unlabeled graph data after training and deployment. To this end, we propose a new approach called graph contrastive self-training (GCST) that combines contrastive learning and self-training to alleviate the performance drop. To evaluate the effectiveness of our approach, we conduct a comprehensive empirical evaluation on four diverse graph datasets, comparing it to domain-invariant feature learning methods and plain self-training methods. Our contribution is three-fold: we formulate and study incremental unsupervised domain adaptation on evolving graphs, present an approach that integrates contrastive learning and self-training, and conduct a comprehensive empirical evaluation of our approach, which demonstrates its stability and superiority over other methods.
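The two ingredients the abstract names can be illustrated with a minimal sketch: self-training keeps only the deployed model's high-confidence predictions on unlabeled target nodes as pseudo-labels, while contrastive learning pulls together embeddings of the same node under two augmented views (an InfoNCE-style objective). The function names, the confidence threshold, and the toy data below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss between two views of node embeddings.

    Matching rows of z1 and z2 (the same node under two augmentations)
    are treated as positives; all other rows are negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives sit on the diagonal

def select_pseudo_labels(probs, threshold=0.6):
    """Self-training step: keep only predictions above a confidence threshold."""
    confidence = probs.max(axis=1)
    mask = confidence >= threshold
    return mask, probs.argmax(axis=1)

# Toy data: embeddings of 8 unlabeled target-domain nodes under two views,
# and the deployed GNN's softmax outputs over 3 classes (all hypothetical).
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.05 * rng.normal(size=(8, 16))         # second view: light perturbation
probs = rng.dirichlet(np.ones(3), size=8)

mask, labels = select_pseudo_labels(probs)
contrastive = info_nce_loss(z1, z2)
print(f"{mask.sum()} confident nodes selected, contrastive loss = {contrastive:.3f}")
```

In an actual adaptation loop, the cross-entropy on the selected pseudo-labeled nodes would be combined with the contrastive term and used to update the GNN on each incoming snapshot of the evolving graph.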

Cite

Text

Chung and Ghosh. "Incremental Unsupervised Domain Adaptation on Evolving Graphs." Proceedings of The 2nd Conference on Lifelong Learning Agents, 2023.

Markdown

[Chung and Ghosh. "Incremental Unsupervised Domain Adaptation on Evolving Graphs." Proceedings of The 2nd Conference on Lifelong Learning Agents, 2023.](https://mlanthology.org/collas/2023/chung2023collas-incremental/)

BibTeX

@inproceedings{chung2023collas-incremental,
  title     = {{Incremental Unsupervised Domain Adaptation on Evolving Graphs}},
  author    = {Chung, Hsing-Huan and Ghosh, Joydeep},
  booktitle = {Proceedings of The 2nd Conference on Lifelong Learning Agents},
  year      = {2023},
  pages     = {683--702},
  volume    = {232},
  url       = {https://mlanthology.org/collas/2023/chung2023collas-incremental/}
}