Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning

Abstract

Recent studies have identified one aggravating factor of LLM hallucinations as the knowledge inconsistency between pre-training and fine-tuning, where unfamiliar fine-tuning data mislead the LLM into fabricating plausible but incorrect outputs. In this paper, we propose a novel fine-tuning strategy called Prereq-Tune to address this knowledge inconsistency and reduce hallucinations. Fundamentally, Prereq-Tune disentangles the learning of skills and knowledge, so the model learns only the task skills without being impacted by the knowledge inconsistency. To achieve this, Prereq-Tune introduces an additional prerequisite learning stage to learn the knowledge required for SFT, allowing the subsequent SFT to focus only on task skills. Prereq-Tune can also be combined with fictitious synthetic data to enhance the grounding of LLM outputs in their internal knowledge. Experiments show that Prereq-Tune outperforms existing baselines in improving LLM factuality across short QA and long-form generation tasks. It also opens new possibilities for knowledge-controlled generation in LLMs. Our code is available at https://github.com/UCSB-NLP-Chang/Prereq_tune.git.
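The sketch below is not the authors' implementation (see the linked repository for that); it is a minimal toy illustration of the two-stage idea the abstract describes, under the assumption that "knowledge" and "skill" are carried by separate parameter groups: a prerequisite stage first trains a knowledge component on documents (in practice, possibly fictitious synthetic ones), and the SFT stage then trains only the skill component with the knowledge parameters frozen. The model, adapter names, and dummy data are illustrative assumptions.

```python
# Toy two-stage training sketch (hypothetical, not the Prereq-Tune codebase).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
VOCAB, DIM = 100, 32

class TinyLM(nn.Module):
    """Toy classifier standing in for an LLM, with separate 'knowledge' and 'skill' adapters."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, DIM), nn.ReLU())
        self.knowledge_adapter = nn.Linear(DIM, DIM)  # trained in the prerequisite stage
        self.skill_adapter = nn.Linear(DIM, DIM)      # trained in the SFT stage
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, x):
        h = self.backbone(x).mean(dim=1)
        h = h + self.knowledge_adapter(h) + self.skill_adapter(h)
        return self.head(h)

def train_stage(model, loader, trainable, steps=100, lr=1e-3):
    """Freeze all parameters except the named submodules, then run a few optimization steps."""
    for p in model.parameters():
        p.requires_grad = False
    params = []
    for name in trainable:
        for p in getattr(model, name).parameters():
            p.requires_grad = True
            params.append(p)
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(loader)
            x, y = next(it)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Dummy "documents" (knowledge) and "QA pairs" (skill); in the paper's setting the
# documents could be fictitious synthetic texts paired with matching QA examples.
docs = TensorDataset(torch.randint(0, VOCAB, (64, 16)), torch.randint(0, VOCAB, (64,)))
qa = TensorDataset(torch.randint(0, VOCAB, (64, 16)), torch.randint(0, VOCAB, (64,)))

model = TinyLM()
# Stage 1: prerequisite learning -- absorb the knowledge the SFT data will rely on.
train_stage(model, DataLoader(docs, batch_size=8, shuffle=True), trainable=["knowledge_adapter"])
# Stage 2: SFT -- learn only the task skill, with the knowledge parameters frozen.
train_stage(model, DataLoader(qa, batch_size=8, shuffle=True), trainable=["skill_adapter", "head"])
```

The key design point the sketch tries to convey is the separation of trainable parameter groups across stages, so that the SFT stage cannot push the model to memorize (or fabricate around) knowledge it never acquired.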

Cite

Text

Liu et al. "Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning." International Conference on Learning Representations, 2025.

Markdown

[Liu et al. "Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/liu2025iclr-fictitious/)

BibTeX

@inproceedings{liu2025iclr-fictitious,
  title     = {{Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning}},
  author    = {Liu, Yujian and Chang, Shiyu and Jaakkola, Tommi and Zhang, Yang},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/liu2025iclr-fictitious/}
}