Do We Really Need Complicated Model Architectures for Temporal Networks?

Abstract

Recurrent neural networks (RNNs) and the self-attention mechanism (SAM) are the de facto methods for extracting spatial-temporal information in temporal graph learning. Interestingly, we find that although both RNN and SAM can lead to good performance, in practice neither of them is always necessary. In this paper, we propose GraphMixer, a conceptually and technically simple architecture that consists of three components: (1) a link-encoder based only on multi-layer perceptrons (MLPs) that summarizes the information from temporal links, (2) a node-encoder based only on neighbor mean-pooling that summarizes node information, and (3) an MLP-based link classifier that performs link prediction from the outputs of the two encoders. Despite its simplicity, GraphMixer attains outstanding performance on temporal link prediction benchmarks, with faster convergence and better generalization. These results motivate us to rethink the importance of simpler model architectures.
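The abstract only names the three components at a high level. The PyTorch-style sketch below is a minimal illustration of that structure under stated assumptions, not the authors' implementation: the class and parameter names, the tensor shapes, and the use of a plain MLP for the link-encoder (the paper's encoder additionally uses a fixed time encoding and MLP-Mixer layers) are all illustrative choices.

# Minimal sketch of the three-component structure described in the abstract.
# Names, dimensions, and the plain-MLP link-encoder are illustrative assumptions,
# not the authors' released implementation.
import torch
import torch.nn as nn


class MLP(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, out_dim)
        )

    def forward(self, x):
        return self.net(x)


class GraphMixerSketch(nn.Module):
    def __init__(self, link_feat_dim, node_feat_dim, hidden_dim):
        super().__init__()
        # (1) MLP-only link-encoder: summarizes each node's recent temporal links.
        self.link_encoder = MLP(link_feat_dim, hidden_dim, hidden_dim)
        # (2) Node-encoder: mean-pooling over neighbor features (no learnable parameters).
        # (3) MLP-based link classifier on the concatenated encodings of the two endpoints.
        self.link_classifier = MLP(2 * (hidden_dim + node_feat_dim), hidden_dim, 1)

    def encode_node(self, recent_link_feats, neighbor_node_feats):
        # recent_link_feats:    [batch, K, link_feat_dim] features of the K most recent links
        # neighbor_node_feats:  [batch, K, node_feat_dim] features of the corresponding neighbors
        link_summary = self.link_encoder(recent_link_feats).mean(dim=1)  # component (1)
        node_summary = neighbor_node_feats.mean(dim=1)                   # component (2)
        return torch.cat([link_summary, node_summary], dim=-1)

    def forward(self, src_inputs, dst_inputs):
        src = self.encode_node(*src_inputs)
        dst = self.encode_node(*dst_inputs)
        return self.link_classifier(torch.cat([src, dst], dim=-1))      # component (3): link logit

Given per-node inputs of shape [batch, K, link_feat_dim] and [batch, K, node_feat_dim] for both endpoints, the forward pass returns one logit per candidate link, which matches the link-prediction setup described in the abstract.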

Cite

Text

Cong et al. "Do We Really Need Complicated Model Architectures for Temporal Networks?" International Conference on Learning Representations, 2023.

Markdown

[Cong et al. "Do We Really Need Complicated Model Architectures for Temporal Networks?" International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/cong2023iclr-we/)

BibTeX

@inproceedings{cong2023iclr-we,
  title     = {{Do We Really Need Complicated Model Architectures for Temporal Networks?}},
  author    = {Cong, Weilin and Zhang, Si and Kang, Jian and Yuan, Baichuan and Wu, Hao and Zhou, Xin and Tong, Hanghang and Mahdavi, Mehrdad},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/cong2023iclr-we/}
}