Why Does ChatGPT Fall Short in Providing Truthful Answers?

Abstract

Recent advancements in large language models, such as ChatGPT, have demonstrated significant potential to impact various aspects of human life. However, ChatGPT still faces challenges in providing reliable and accurate answers to user questions. To better understand the model's particular weaknesses in providing truthful answers, we embark on an in-depth exploration of open-domain question answering. Specifically, we undertake a detailed examination of ChatGPT's failures, categorized into four types: comprehension, factuality, specificity, and inference. We further pinpoint factuality as the failure type contributing most to untruthful answers and identify two critical abilities associated with factuality: knowledge memorization and knowledge recall. Through experiments focusing on factuality, we propose several potential enhancement strategies. Our findings suggest that augmenting the model with granular external knowledge and with cues for knowledge recall can enhance the model's factuality in answering questions.
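
To make the enhancement strategy concrete, below is a minimal sketch of prompt augmentation with retrieved evidence and a recall cue. This is not the paper's implementation: the `retrieve_evidence` helper, the prompt wording, and the model name are illustrative placeholders, and any retriever (BM25, dense retrieval, a search API) could stand in.

```python
# Minimal sketch of "granular external knowledge + knowledge-recall cue".
# Hypothetical placeholders: retrieve_evidence() stands in for a real
# retriever, and the model name is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def retrieve_evidence(question: str) -> list[str]:
    """Placeholder retriever returning short, fine-grained evidence
    sentences; swap in BM25 / dense retrieval in practice."""
    return ["Paris has been the capital of France since 987 CE."]


def answer_with_evidence(question: str) -> str:
    evidence = retrieve_evidence(question)
    # Granular external knowledge: present evidence as discrete facts,
    # one per line, rather than as one long undifferentiated passage.
    evidence_block = "\n".join(f"- {e}" for e in evidence)
    # Recall cue: ask the model to recall relevant facts before answering.
    prompt = (
        "Evidence:\n"
        f"{evidence_block}\n\n"
        "First recall the facts relevant to the question, "
        "then answer it concisely.\n\n"
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Listing evidence as short, discrete facts is one plausible way to operationalize "granular" external knowledge, and the explicit recall instruction is one way to cue knowledge recall; the paper's own prompts may differ.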

Cite

Text

Zheng et al. "Why Does ChatGPT Fall Short in Providing Truthful Answers?" NeurIPS 2023 Workshops: ICBINB, 2023.

Markdown

[Zheng et al. "Why Does ChatGPT Fall Short in Providing Truthful Answers?" NeurIPS 2023 Workshops: ICBINB, 2023.](https://mlanthology.org/neuripsw/2023/zheng2023neuripsw-chatgpt/)

BibTeX

@inproceedings{zheng2023neuripsw-chatgpt,
  title     = {{Why Does ChatGPT Fall Short in Providing Truthful Answers?}},
  author    = {Zheng, Shen and Huang, Jie and Chang, Kevin},
  booktitle = {NeurIPS 2023 Workshops: ICBINB},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/zheng2023neuripsw-chatgpt/}
}