CFEVER: A Chinese Fact Extraction and VERification Dataset
Abstract
We present CFEVER, a Chinese dataset for Fact Extraction and VERification. CFEVER comprises 30,012 manually created claims based on content in Chinese Wikipedia. Each claim is labeled as “Supports”, “Refutes”, or “Not Enough Info” to indicate its degree of factuality. As in the FEVER dataset, claims in the “Supports” and “Refutes” categories are further annotated with corresponding evidence sentences drawn from one or more pages in Chinese Wikipedia. Our labeled dataset achieves a Fleiss’ kappa of 0.7934 for five-way inter-annotator agreement. In addition, through experiments with state-of-the-art approaches developed on the FEVER dataset and a simple baseline for CFEVER, we demonstrate that our dataset is a rigorous new benchmark for fact extraction and verification, which can be further used to develop automated systems that alleviate human fact-checking efforts. CFEVER is available at https://ikmlab.github.io/CFEVER.
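The reported Fleiss’ kappa of 0.7934 measures chance-corrected agreement among the five annotators over the three claim labels. As an illustration of how this statistic is computed (a minimal sketch of the standard formula, not the authors’ code; the example counts are hypothetical):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a matrix of shape (n_items, n_categories),
    where ratings[i][j] counts how many annotators assigned item i
    to category j. Assumes the same number of raters per item."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])
    # p_j: overall proportion of assignments falling into category j
    p_j = [sum(row[j] for row in ratings) / (n_items * n_raters)
           for j in range(n_cats)]
    # P_i: observed pairwise agreement on each item
    P_i = [(sum(c * c for c in row) - n_raters)
           / (n_raters * (n_raters - 1)) for row in ratings]
    P_bar = sum(P_i) / n_items          # mean observed agreement
    P_e = sum(p * p for p in p_j)       # expected agreement by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 2 claims, 5 annotators, 3 labels
# (Supports, Refutes, Not Enough Info), with perfect agreement:
print(fleiss_kappa([[5, 0, 0], [0, 5, 0]]))  # -> 1.0
```

A kappa near 0.79 indicates substantial agreement, which supports the reliability of the five-way annotation scheme.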
Cite
Text
Lin et al. "CFEVER: A Chinese Fact Extraction and VERification Dataset." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I17.29825

BibTeX
@inproceedings{lin2024aaai-cfever,
title = {{CFEVER: A Chinese Fact Extraction and VERification Dataset}},
author = {Lin, Ying-Jia and Lin, Chun-Yi and Yeh, Chia-Jen and Li, Yi-Ting and Hu, Yun-Yu and Hsu, Chih-Hao and Lee, Mei-Feng and Kao, Hung-Yu},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {18626--18634},
doi = {10.1609/AAAI.V38I17.29825},
url = {https://mlanthology.org/aaai/2024/lin2024aaai-cfever/}
}