Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation

Abstract

Integrating large language models (LLMs) into recommender systems has created new opportunities for improving recommendation quality. However, a comprehensive benchmark is needed to thoroughly evaluate and compare the recommendation capabilities of LLMs with those of traditional recommender systems. In this paper, we introduce RecBench, which systematically investigates various item representation forms (including unique identifiers, text, semantic embeddings, and semantic identifiers) and evaluates two primary recommendation tasks: click-through rate (CTR) prediction and sequential recommendation (SeqRec). Our extensive experiments cover up to 17 large models and five diverse datasets spanning the fashion, news, video, books, and music domains. Our findings indicate that LLM-based recommenders outperform conventional recommenders, achieving up to a 5% AUC improvement on CTR and up to a 170% NDCG@10 improvement on SeqRec. However, these substantial performance gains come at the cost of significantly reduced inference efficiency, rendering LLMs impractical as real-time recommenders. We have released our code and data so that other researchers can reproduce and build upon our results.
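For readers unfamiliar with the NDCG@10 metric cited above, the following is a minimal sketch of how it is typically computed under the common leave-one-out SeqRec evaluation protocol, where each user has a single held-out ground-truth item. The function name and data layout here are illustrative and are not taken from the RecBench codebase.

import math

def ndcg_at_k(ranked_items, ground_truth, k=10):
    """NDCG@k for one user with a single held-out ground-truth item.

    ranked_items: the model's top-ranked item IDs, best first.
    ground_truth: the one relevant item ID (leave-one-out setup).
    With a single relevant item, the ideal DCG is 1, so NDCG reduces
    to 1 / log2(rank + 2) if the item appears in the top k, else 0.
    """
    for rank, item in enumerate(ranked_items[:k]):
        if item == ground_truth:
            return 1.0 / math.log2(rank + 2)
    return 0.0

# Example: ground-truth item ranked 3rd -> 1 / log2(4) = 0.5
print(ndcg_at_k(["i7", "i2", "i9", "i4"], "i9"))  # 0.5

Because the ideal DCG is 1 in this setup, the metric is simply a logarithmic discount on the rank at which the held-out item is recovered, which is why rank improvements near the top of the list dominate the reported gains.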

Cite

Text

Liu et al. "Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation." Advances in Neural Information Processing Systems, 2025.

Markdown

[Liu et al. "Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/liu2025neurips-llms/)

BibTeX

@inproceedings{liu2025neurips-llms,
  title     = {{Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation}},
  author    = {Liu, Qijiong and Zhu, Jieming and Fan, Lu and Wang, Kun and Hu, Hengchang and Guo, Wei and Liu, Yong and Wu, Xiao-Ming},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/liu2025neurips-llms/}
}