Fact or Fiction? Can LLMs Be Reliable Annotators for Political Truths?

Abstract

Political misinformation poses significant challenges to democratic processes, shaping public opinion and trust in media. Manual fact-checking methods face issues of scalability and annotator bias, while machine learning models require large, costly labelled datasets. This study investigates the use of state-of-the-art large language models (LLMs) as reliable annotators for detecting political factuality in news articles. Using open-source LLMs, we create a politically diverse dataset, labelled for bias through LLM-generated annotations. These annotations are validated by human experts and further evaluated by LLM-based judges to assess their accuracy and reliability. Our approach offers a scalable and robust alternative to traditional fact-checking, enhancing transparency and public trust in media.
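
The abstract describes an annotation pipeline in which an open-source LLM labels news articles for political factuality before human and LLM-judge validation. The sketch below is a minimal illustration of that LLM-as-annotator step, not the authors' implementation: the endpoint URL, model name, prompt, and two-way label schema are all assumptions made for the example, and it presumes an OpenAI-compatible chat-completions server is running locally.

# Minimal sketch of LLM-based factuality annotation, assuming a local
# OpenAI-compatible server; model, endpoint, and labels are placeholders.
import json
import requests

API_URL = "http://localhost:8000/v1/chat/completions"   # assumed endpoint
MODEL = "meta-llama/Llama-3.1-8B-Instruct"               # placeholder open-source model

SYSTEM_PROMPT = (
    "You are a fact-checking annotator. Read the news excerpt and reply with "
    'a JSON object: {"label": "factual" | "misleading", "rationale": "..."}.'
)

def annotate(article_text: str) -> dict:
    """Ask the LLM to label one article excerpt for political factuality."""
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": article_text},
        ],
        "temperature": 0.0,  # deterministic output for annotation consistency
    }
    response = requests.post(API_URL, json=payload, timeout=60)
    response.raise_for_status()
    content = response.json()["choices"][0]["message"]["content"]
    return json.loads(content)

if __name__ == "__main__":
    print(annotate("The senator claimed the bill cuts taxes for all households."))

In the paper's setting, these machine-generated labels would then be checked by human experts and scored by LLM-based judges; the sketch covers only the first, annotation-generating stage.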

Cite

Text

Chatrath et al. "Fact or Fiction? Can LLMs Be Reliable Annotators for Political Truths?" NeurIPS 2024 Workshops: SoLaR, 2024.

Markdown

[Chatrath et al. "Fact or Fiction? Can LLMs Be Reliable Annotators for Political Truths?" NeurIPS 2024 Workshops: SoLaR, 2024.](https://mlanthology.org/neuripsw/2024/chatrath2024neuripsw-fact/)

BibTeX

@inproceedings{chatrath2024neuripsw-fact,
  title     = {{Fact or Fiction? Can LLMs Be Reliable Annotators for Political Truths?}},
  author    = {Chatrath, Veronica and Lotif, Marcelo and Raza, Shaina},
  booktitle = {NeurIPS 2024 Workshops: SoLaR},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/chatrath2024neuripsw-fact/}
}