Citations and Trust in LLM Generated Responses

Abstract

Question-answering systems are rapidly advancing, but their opaque nature may erode user trust. We explored trust through an anti-monitoring framework, in which trust is predicted to correlate with the presence of citations and to be inversely related to checking citations. We tested this hypothesis in a live question-answering experiment that presented text responses generated by a commercial chatbot along with varying numbers of citations (zero, one, or five), both relevant and random, and recorded whether participants checked the citations and their self-reported trust in the generated responses. We found a significant increase in trust when citations were present, a result that held even when the citations were random; we also found a significant decrease in trust when participants checked the citations. These results highlight the importance of citations in enhancing trust in AI-generated content.

Cite

Text

Ding et al. "Citations and Trust in LLM Generated Responses." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I22.34550

Markdown

[Ding et al. "Citations and Trust in LLM Generated Responses." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/ding2025aaai-citations/) doi:10.1609/AAAI.V39I22.34550

BibTeX

@inproceedings{ding2025aaai-citations,
  title     = {{Citations and Trust in LLM Generated Responses}},
  author    = {Ding, Yifan and Facciani, Matthew and Joyce, Ellen and Poudel, Amrit and Bhattacharya, Sanmitra and Veeramani, Balaji and Aguiñaga, Sal and Weninger, Tim},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {23787--23795},
  doi       = {10.1609/AAAI.V39I22.34550},
  url       = {https://mlanthology.org/aaai/2025/ding2025aaai-citations/}
}