Teaching Language Models to Hallucinate Less with Synthetic Tasks

Abstract

Large language models (LLMs) frequently hallucinate on abstractive summarization tasks such as document-based question-answering, meeting summarization, and clinical report generation, even though all necessary information is included in context. However, optimizing to make LLMs hallucinate less is challenging, as hallucination is hard to evaluate efficiently, cheaply, and reliably at each optimization step. In this work, we show that reducing hallucination on a _synthetic task_ can also reduce hallucination on real-world downstream tasks. Our method, SynTra, first designs a synthetic task where hallucinations are easy to elicit and measure. It next optimizes the LLM's system message via prefix tuning on the synthetic task, then uses the system message on realistic, hard-to-optimize tasks. Across three realistic abstractive summarization tasks, we reduce hallucination for two 13B-parameter LLMs using a supervision signal from only a synthetic retrieval task. We also find that optimizing the system message rather than the model weights can be critical; fine-tuning the entire model on the synthetic task can counterintuitively _increase_ hallucination. Overall, SynTra demonstrates that the extra flexibility of working with synthetic data can help mitigate undesired behaviors in practice.
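
To make the core idea concrete, below is a minimal sketch (not the authors' released code) of prefix tuning a soft "system message" on a synthetic task while keeping the base model frozen. It assumes a HuggingFace-style causal LM; the model name and the data loader `synthetic_retrieval_batch()` are hypothetical placeholders for whatever synthetic retrieval task and checkpoint are actually used.

```python
# Sketch of SynTra-style optimization: train only a soft prefix ("system message")
# on a synthetic task, with the LLM's weights frozen throughout.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")
model.requires_grad_(False)  # base model stays frozen; only the prefix is trained

n_prefix = 32
d = model.config.hidden_size
prefix = torch.nn.Parameter(torch.randn(1, n_prefix, d) * 0.02)  # soft system message
optimizer = torch.optim.Adam([prefix], lr=1e-3)

for step in range(1000):
    # Hypothetical helper: token ids for a synthetic retrieval prompt and its
    # hallucination-free target (positions without loss set to -100).
    input_ids, labels = synthetic_retrieval_batch()

    token_embeds = model.get_input_embeddings()(input_ids)            # (B, T, d)
    inputs_embeds = torch.cat(
        [prefix.expand(token_embeds.size(0), -1, -1), token_embeds], dim=1
    )
    # Prefix positions carry no loss; pad labels so lengths match the longer input.
    pad = torch.full((labels.size(0), n_prefix), -100, dtype=labels.dtype)
    out = model(inputs_embeds=inputs_embeds, labels=torch.cat([pad, labels], dim=1))

    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

At inference on the realistic summarization tasks, the learned prefix would simply be prepended (as embeddings) in place of, or alongside, the usual system message, with no further updates to the model.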

Cite

Text

Jones et al. "Teaching Language Models to Hallucinate Less with Synthetic Tasks." International Conference on Learning Representations, 2024.

Markdown

[Jones et al. "Teaching Language Models to Hallucinate Less with Synthetic Tasks." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/jones2024iclr-teaching/)

BibTeX

@inproceedings{jones2024iclr-teaching,
  title     = {{Teaching Language Models to Hallucinate Less with Synthetic Tasks}},
  author    = {Jones, Erik and Palangi, Hamid and Ribeiro, Clarisse Simões and Chandrasekaran, Varun and Mukherjee, Subhabrata and Mitra, Arindam and Awadallah, Ahmed Hassan and Kamar, Ece},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/jones2024iclr-teaching/}
}