ContextGNN: Beyond Two-Tower Recommendation Systems
Abstract
Recommendation systems predominantly utilize two-tower architectures, which evaluate user-item rankings through the inner product of their respective embeddings. However, one key limitation of two-tower models is that they learn a pair-agnostic representation of users and items. In contrast, pair-wise representations either scale poorly due to their quadratic complexity or are too restrictive on the candidate pairs to rank. To address these issues, we introduce Context-based Graph Neural Networks (ContextGNNs), a novel deep learning architecture for link prediction in recommendation systems. The method employs a pair-wise representation technique for familiar items situated within a user's local subgraph, while leveraging two-tower representations to facilitate the recommendation of exploratory items. A final network then predicts how to fuse both pair-wise and two-tower recommendations into a single ranking of items. We demonstrate that ContextGNN is able to adapt to different data characteristics and outperforms existing methods, both traditional and GNN-based, on a diverse set of practical recommendation tasks, improving performance by 20% on average.
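For intuition, the sketch below illustrates the hybrid scoring idea from the abstract in plain PyTorch: pair-wise scores are computed only for items in a user's local subgraph, two-tower inner-product scores cover all remaining items, and a small fusion term combines the two into one ranking. This is a minimal, hypothetical example; all module and variable names (e.g. `HybridScorer`, `pair_mlp`, `fuse`) are illustrative stand-ins, not the authors' implementation, and the pair-wise branch here is a simple MLP rather than a GNN over the user's subgraph.

```python
import torch
import torch.nn as nn


class HybridScorer(nn.Module):
    """Illustrative fusion of pair-wise and two-tower scores (not the authors' code)."""

    def __init__(self, num_users: int, num_items: int, dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)   # two-tower user embeddings
        self.item_emb = nn.Embedding(num_items, dim)   # two-tower item embeddings
        # Hypothetical stand-in for the pair-wise model; a real ContextGNN would
        # derive these scores from the user's local subgraph with a GNN.
        self.pair_mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )
        # Crude stand-in for the fusion network: a learned per-user offset that
        # calibrates pair-wise scores against two-tower scores.
        self.fuse = nn.Linear(dim, 1)

    def forward(self, user: torch.Tensor, local_items: torch.Tensor) -> torch.Tensor:
        """Score all items for one user; `local_items` indexes the user's subgraph items."""
        u = self.user_emb(user)                        # [dim]
        scores = self.item_emb.weight @ u              # two-tower scores for every item
        # Pair-wise scores replace the two-tower scores for local (familiar) items.
        pair_in = torch.cat(
            [u.expand(local_items.size(0), -1), self.item_emb(local_items)], dim=-1
        )
        pair_scores = self.pair_mlp(pair_in).squeeze(-1) + self.fuse(u)
        scores = scores.clone()
        scores[local_items] = pair_scores
        return scores                                  # [num_items] ranking scores


# Usage: rank items for user 0 whose local subgraph contains items {3, 7, 42}.
model = HybridScorer(num_users=100, num_items=1000)
ranking = model(torch.tensor(0), torch.tensor([3, 7, 42])).argsort(descending=True)
```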
Cite
Text
Yuan et al. "ContextGNN: Beyond Two-Tower Recommendation Systems." International Conference on Learning Representations, 2025.
BibTeX
@inproceedings{yuan2025iclr-contextgnn,
title = {{ContextGNN: Beyond Two-Tower Recommendation Systems}},
author = {Yuan, Yiwen and Zhang, Zecheng and He, Xinwei and Nitta, Akihiro and Hu, Weihua and Shah, Manan and Stojanovič, Blaž and Huang, Shenyang and Lenssen, Jan Eric and Leskovec, Jure and Fey, Matthias},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/yuan2025iclr-contextgnn/}
}