Coherency Improved Explainable Recommendation via Large Language Model

Abstract

Explainable recommender systems are designed to clarify the reasoning behind each recommendation, enabling users to understand the underlying logic. Previous works perform rating prediction and explanation generation in a multi-task manner, but they suffer from incoherence between the predicted ratings and the generated explanations. To address this issue, we propose a novel framework that employs a large language model (LLM) to generate a rating, transforms the rating into a rating vector, and finally generates an explanation conditioned on the rating vector and user-item information. Moreover, we propose utilizing publicly available LLMs and pre-trained sentiment analysis models to automatically evaluate coherence without human annotations. Extensive experiments on three explainable-recommendation datasets show that the proposed framework is effective, outperforming state-of-the-art baselines with improvements of 7.3% in explainability and 4.4% in text quality.
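
The abstract describes two ideas: conditioning explanation generation on the predicted rating (via a learned rating vector), and checking rating-explanation coherence automatically with an off-the-shelf sentiment model. The Python sketch below illustrates both under stated assumptions; it is not the authors' implementation. A tiny GRU decoder stands in for the paper's LLM backbone, the layer and function names (RatingConditionedExplainer, rating_proj, is_coherent) are hypothetical, and Hugging Face's default sentiment-analysis pipeline stands in for the pre-trained sentiment model.

# Minimal sketch of the abstract's pipeline, assuming hypothetical names.
import torch
import torch.nn as nn
from transformers import pipeline

class RatingConditionedExplainer(nn.Module):
    """Predict a rating, embed it as a vector, and condition the
    explanation decoder on that vector plus user/item embeddings."""
    def __init__(self, n_users, n_items, dim=64, vocab=10000):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.rating_head = nn.Linear(2 * dim, 1)  # step 1: rating prediction
        self.rating_proj = nn.Linear(1, dim)      # step 2: rating -> rating vector
        # A small GRU stands in for the LLM decoder used in the paper.
        self.decoder = nn.GRU(3 * dim, dim, batch_first=True)
        self.lm_head = nn.Linear(dim, vocab)

    def forward(self, user, item, steps=20):
        u, i = self.user_emb(user), self.item_emb(item)
        rating = self.rating_head(torch.cat([u, i], dim=-1))
        r_vec = self.rating_proj(rating)
        # Step 3: explanation generation conditioned on user, item, and rating.
        # Feeding the same context at every step is a simplification.
        ctx = torch.cat([u, i, r_vec], dim=-1).unsqueeze(1).expand(-1, steps, -1)
        h, _ = self.decoder(ctx)
        return rating, self.lm_head(h)  # rating and explanation-token logits

# Automatic coherence check: the explanation's sentiment polarity should
# agree with the predicted rating (here, >3 on a 1-5 scale counts as positive).
sentiment = pipeline("sentiment-analysis")

def is_coherent(pred_rating: float, explanation: str) -> bool:
    label = sentiment(explanation)[0]["label"]  # "POSITIVE" or "NEGATIVE"
    return (label == "POSITIVE") == (pred_rating > 3.0)

# Example: is_coherent(4.5, "Great battery life and a sharp display.") -> True

Conditioning the decoder on the predicted rating is what ties the two tasks together: the explanation cannot ignore the rating, which is the coherence property the paper targets, and the sentiment check above is one way to score that property without human annotation.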

Cite

Text

Liu et al. "Coherency Improved Explainable Recommendation via Large Language Model." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I11.33329

Markdown

[Liu et al. "Coherency Improved Explainable Recommendation via Large Language Model." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/liu2025aaai-coherency/) doi:10.1609/AAAI.V39I11.33329

BibTeX

@inproceedings{liu2025aaai-coherency,
  title     = {{Coherency Improved Explainable Recommendation via Large Language Model}},
  author    = {Liu, Shijie and Ding, Ruixin and Lu, Weihai and Wang, Jun and Yu, Mo and Shi, Xiaoming and Zhang, Wei},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {12201--12209},
  doi       = {10.1609/AAAI.V39I11.33329},
  url       = {https://mlanthology.org/aaai/2025/liu2025aaai-coherency/}
}