Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation

Abstract

New Natural Language Processing (NLP) benchmarks are urgently needed to keep pace with the rapid development of large language models (LLMs). We present Xiezhi, the most comprehensive evaluation suite designed to assess holistic domain knowledge. Xiezhi comprises 249,587 multiple-choice questions spanning 516 diverse disciplines across 13 subjects, accompanied by Xiezhi-Specialty with 14,041 questions and Xiezhi-Interdiscipline with 10,746 questions. We evaluate 47 cutting-edge LLMs on Xiezhi. Results indicate that LLMs exceed average human performance in science, engineering, agronomy, medicine, and art, but fall short in economics, jurisprudence, pedagogy, literature, history, and management. All evaluation code and data are open-sourced at https://github.com/MikeGu721/XiezhiBenchmark

Cite

Text

Gu et al. "Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I16.29767

Markdown

[Gu et al. "Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/gu2024aaai-xiezhi/) doi:10.1609/AAAI.V38I16.29767

BibTeX

@inproceedings{gu2024aaai-xiezhi,
  title     = {{Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation}},
  author    = {Gu, Zhouhong and Zhu, Xiaoxuan and Ye, Haoning and Zhang, Lin and Wang, Jianchen and Zhu, Yixin and Jiang, Sihang and Xiong, Zhuozhi and Li, Zihan and Wu, Weijie and He, Qianyu and Xu, Rui and Huang, Wenhao and Liu, Jingping and Wang, Zili and Wang, Shusen and Zheng, Weiguo and Feng, Hongwei and Xiao, Yanghua},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {18099--18107},
  doi       = {10.1609/AAAI.V38I16.29767},
  url       = {https://mlanthology.org/aaai/2024/gu2024aaai-xiezhi/}
}