Contrastive Learning Reduces Hallucination in Conversations
Abstract
Pre-trained language models (LMs) store knowledge in their parameters and can generate informative responses when used in conversational systems. However, LMs suffer from the problem of “hallucination”: they may generate plausible-looking statements that are irrelevant or factually incorrect. To address this problem, we propose a contrastive learning scheme, named MixCL. A novel mixed contrastive objective is proposed to explicitly optimize the implicit knowledge elicitation process of LMs, and thus reduce their hallucination in conversations. We also examine negative sampling strategies based on retrieved hard negatives and model-generated negatives. We conduct experiments on Wizard-of-Wikipedia, a public, open-domain knowledge-grounded dialogue benchmark, and assess the effectiveness of MixCL. MixCL effectively reduces the hallucination of LMs in conversations and achieves the highest performance among LM-based dialogue agents in terms of relevancy and factuality. We show that MixCL achieves comparable performance to state-of-the-art KB-based approaches while enjoying notable advantages in terms of efficiency and scalability.
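The contrastive objective described above trains the model to score correct knowledge higher than hard negatives. As a rough illustration of the general idea (not the paper's actual MixCL objective, which operates on mixed token spans inside the LM), the following sketch implements a standard InfoNCE-style contrastive loss over one positive and a set of negatives; all function and variable names here are illustrative assumptions:

```python
import numpy as np

def info_nce_loss(query, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch only).

    Scores the positive against negatives via cosine similarity and
    returns the negative log-probability of picking the positive.
    Hard negatives (e.g. retrieved or model-generated, as in the
    abstract) would simply be supplied in `negatives`.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Similarity of the query to the positive (index 0) and each negative.
    logits = np.array([cos(query, positive)] +
                      [cos(query, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability before exponentiating
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # low when the positive outscores negatives
```

Minimizing this loss pushes the query representation toward the factual (positive) knowledge and away from the hallucinated (negative) candidates, which is the intuition behind contrastive hallucination reduction.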
Cite
Text
Sun et al. "Contrastive Learning Reduces Hallucination in Conversations." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I11.26596
Markdown
[Sun et al. "Contrastive Learning Reduces Hallucination in Conversations." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/sun2023aaai-contrastive/) doi:10.1609/AAAI.V37I11.26596
BibTeX
@inproceedings{sun2023aaai-contrastive,
title = {{Contrastive Learning Reduces Hallucination in Conversations}},
author = {Sun, Weiwei and Shi, Zhengliang and Gao, Shen and Ren, Pengjie and de Rijke, Maarten and Ren, Zhaochun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {13618--13626},
doi = {10.1609/AAAI.V37I11.26596},
url = {https://mlanthology.org/aaai/2023/sun2023aaai-contrastive/}
}