Fine-Tuning Large Language Model Based Explainable Recommendation with Explainable Quality Reward

Abstract

Large language model-based explainable recommendation (LLM-based ER) systems can provide remarkably human-like explanations and have received wide attention from researchers. However, existing LLM-based ER systems suffer from three quality problems in their generated explanations, i.e., lack of personalization, inconsistency, and questionable explanation data. To address these problems, we propose a novel LLM-based ER model, denoted LLM2ER, to serve as a backbone, and devise two innovative explainable-quality reward models for fine-tuning this backbone in a reinforcement learning paradigm, ultimately yielding a fine-tuned model, denoted LLM2ER-EQR, that can provide high-quality explanations. LLM2ER-EQR can generate personalized, informative, and consistent high-quality explanations even when learned from explanation datasets of questionable quality. Extensive experiments conducted on three real-world datasets demonstrate that our model can generate fluent, diverse, informative, and highly personalized explanations.
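The abstract describes reward-guided fine-tuning of an LLM backbone for explanation generation, but the paper's implementation details are not reproduced on this page. The following is a minimal, hedged sketch of that general idea using a REINFORCE-style update, where a reward function scores sampled explanations and scales their log-likelihood. The base model (`gpt2`), the toy length-based reward, the prompts, and all hyperparameters are illustrative assumptions, not the authors' LLM2ER-EQR setup.

```python
# Minimal sketch (not the authors' code): REINFORCE-style fine-tuning of a causal LM
# with an external reward that scores generated explanations. Model name, reward,
# prompts, and hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("gpt2")
policy = AutoModelForCausalLM.from_pretrained("gpt2").to(device)
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-5)

def explanation_quality_reward(text: str) -> float:
    """Stand-in for the paper's explainable-quality reward models.
    Here: a trivial proxy that favors longer explanations."""
    return min(len(text.split()) / 30.0, 1.0)

prompts = [
    "Explain why user U123 would like the item 'wireless noise-cancelling headphones':",
    "Explain why user U456 would like the item 'stainless-steel espresso machine':",
]

for prompt in prompts:
    enc = tokenizer(prompt, return_tensors="pt").to(device)
    prompt_len = enc["input_ids"].shape[1]

    # Sample an explanation from the current policy (no gradients during sampling).
    with torch.no_grad():
        sampled = policy.generate(
            **enc, do_sample=True, top_p=0.9, max_new_tokens=40,
            pad_token_id=tokenizer.eos_token_id,
        )
    text = tokenizer.decode(sampled[0, prompt_len:], skip_special_tokens=True)
    reward = explanation_quality_reward(text)

    # REINFORCE: scale the log-likelihood of the sampled continuation by the reward.
    logits = policy(sampled).logits[:, :-1, :]
    targets = sampled[:, 1:]
    logprobs = torch.log_softmax(logits, dim=-1).gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    gen_logprob = logprobs[:, prompt_len - 1:].sum()  # only the generated tokens
    loss = -reward * gen_logprob

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"reward={reward:.2f}  explanation={text!r}")
```

In practice the paper uses trained reward models rather than a hand-coded proxy, and a full RL fine-tuning pipeline would add the usual stabilizers (reward baselines or a KL penalty against the pretrained backbone); this sketch only illustrates the reward-weighted update.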

Cite

Text

Yang et al. "Fine-Tuning Large Language Model Based Explainable Recommendation with Explainable Quality Reward." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I8.28777

Markdown

[Yang et al. "Fine-Tuning Large Language Model Based Explainable Recommendation with Explainable Quality Reward." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/yang2024aaai-fine/) doi:10.1609/AAAI.V38I8.28777

BibTeX

@inproceedings{yang2024aaai-fine,
  title     = {{Fine-Tuning Large Language Model Based Explainable Recommendation with Explainable Quality Reward}},
  author    = {Yang, Mengyuan and Zhu, Mengying and Wang, Yan and Chen, Linxun and Zhao, Yilei and Wang, Xiuyuan and Han, Bing and Zheng, Xiaolin and Yin, Jianwei},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {9250--9259},
  doi       = {10.1609/AAAI.V38I8.28777},
  url       = {https://mlanthology.org/aaai/2024/yang2024aaai-fine/}
}