Preference Discerning with LLM-Enhanced Generative Retrieval
Abstract
In sequential recommendation, models recommend items based on a user's interaction history. To this end, current models usually incorporate information such as item descriptions and user intent or preferences. User preferences are usually not explicitly given in open-source datasets and thus need to be approximated, for example via large language models (LLMs). Current approaches leverage approximated user preferences only during training and rely solely on past interaction history for recommendations, limiting their ability to dynamically adapt to changing preferences and potentially reinforcing echo chambers. To address this issue, we propose a new paradigm, namely *preference discerning*, which explicitly conditions a generative recommendation model on user preferences expressed in natural language within its context. To evaluate *preference discerning*, we introduce a novel benchmark that provides a holistic evaluation across various scenarios, including preference steering and sentiment following. Evaluating current state-of-the-art methods on our benchmark, we find that their ability to dynamically adapt to evolving user preferences is limited. To address this, we propose a new method named Mender (**M**ultimodal Prefer**en**ce **D**iscern**er**), which achieves state-of-the-art performance on our benchmark. Our results show that Mender effectively adapts its recommendations when guided by human preferences, even ones not observed during training, paving the way toward more flexible recommendation models.
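To make the paradigm concrete, below is a minimal sketch of what conditioning a generative recommender on a natural-language preference could look like, assuming a T5 backbone from Hugging Face `transformers`. The preference string, the `<id_*>` semantic-ID placeholders, and the prompt format are illustrative assumptions, not the authors' Mender implementation.

```python
# Sketch: preference-discerning input conditioning for generative retrieval.
# Assumes a seq2seq backbone (t5-small here); semantic IDs and the prompt
# layout are hypothetical placeholders, not the paper's exact format.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# A user preference approximated in natural language (e.g., via an LLM from
# past reviews), plus the interaction history as semantic-ID sequences.
preference = "The user prefers lightweight trail-running shoes."
history = ["<id_12> <id_7> <id_3>", "<id_44> <id_2> <id_9>"]  # hypothetical semantic IDs

# Preference discerning: the preference sits directly in the model's context,
# so swapping it at inference time can steer the recommendation.
prompt = f"preference: {preference} history: {' ; '.join(history)} next item:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate the semantic-ID tokens of the recommended next item.
output_ids = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In this setup, steering amounts to editing the `preference` string rather than retraining, which is what distinguishes in-context preference conditioning from approaches that use approximated preferences only as a training signal.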
Cite
Text
Paischer et al. "Preference Discerning with LLM-Enhanced Generative Retrieval." Transactions on Machine Learning Research, 2025.
Markdown
[Paischer et al. "Preference Discerning with LLM-Enhanced Generative Retrieval." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/paischer2025tmlr-preference/)
BibTeX
@article{paischer2025tmlr-preference,
  title = {{Preference Discerning with LLM-Enhanced Generative Retrieval}},
  author = {Paischer, Fabian and Yang, Liu and Liu, Linfeng and Shao, Shuai and Hassani, Kaveh and Li, Jiacheng and Chen, Ricky T. Q. and Li, Zhang Gabriel and Gao, Xiaoli and Shao, Wei and Feng, Xue and Noorshams, Nima and Park, Sem and Long, Bo and Eghbalzadeh, Hamid},
  journal = {Transactions on Machine Learning Research},
  year = {2025},
  url = {https://mlanthology.org/tmlr/2025/paischer2025tmlr-preference/}
}