Learning Contrastive Multi-View Graphs for Recommendation (Student Abstract)

Abstract

This paper exploits self-supervised learning (SSL) to learn more accurate and robust representations from the user-item interaction graph. In particular, we propose a novel SSL model that effectively leverages contrastive multi-view learning and a pseudo-siamese network to construct a pre-training and post-training framework. Moreover, we present three graph augmentation techniques during the pre-training stage and explore the effects of combining different augmentations, which allows us to learn general and robust representations for GNN-based recommendation. Experimental evaluations on real-world datasets show that the proposed solution significantly improves recommendation accuracy, especially on sparse data, and is also resistant to noise.

Cite

Text

Cheng et al. "Learning Contrastive Multi-View Graphs for Recommendation (Student Abstract)." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I11.21600

Markdown

[Cheng et al. "Learning Contrastive Multi-View Graphs for Recommendation (Student Abstract)." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/cheng2022aaai-learning/) doi:10.1609/AAAI.V36I11.21600

BibTeX

@inproceedings{cheng2022aaai-learning,
  title     = {{Learning Contrastive Multi-View Graphs for Recommendation (Student Abstract)}},
  author    = {Cheng, Zhangtao and Zhong, Ting and Zhang, Kunpeng and Walker, Joojo and Zhou, Fan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {12927--12928},
  doi       = {10.1609/AAAI.V36I11.21600},
  url       = {https://mlanthology.org/aaai/2022/cheng2022aaai-learning/}
}