PhantomWiki: On-Demand Datasets for Reasoning and Retrieval Evaluation

Abstract

High-quality benchmarks are essential for evaluating the reasoning and retrieval capabilities of large language models (LLMs). However, curating static datasets for this purpose is not a permanent solution, as they are prone to data leakage and inflated performance results. To address these challenges, we propose PhantomWiki: a pipeline that generates unique, factually consistent document corpora with diverse question-answer pairs. Unlike prior work, PhantomWiki is neither a fixed dataset nor based on any existing data; instead, a new PhantomWiki instance is generated on demand for each evaluation. We vary question difficulty and corpus size to disentangle reasoning and retrieval capabilities, respectively, and find that PhantomWiki datasets are surprisingly challenging for frontier LLMs. We thus contribute a scalable, data-leakage-resistant framework for the disentangled evaluation of reasoning, retrieval, and tool-use abilities.
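To make the on-demand generation idea concrete, the following is a minimal, self-contained Python sketch of the workflow the abstract describes: seed a random generator, synthesize a small corpus of internally consistent facts with matching question-answer pairs, and produce a fresh instance for each evaluation run. Everything below (generate_instance, the Person0-style names, the parent relation) is an illustrative assumption, not PhantomWiki's actual API.

import random

def generate_instance(seed, num_people=5):
    # Hypothetical illustration of on-demand dataset generation;
    # not PhantomWiki's actual interface.
    rng = random.Random(seed)
    names = [f"Person{i}" for i in range(num_people)]
    # Each person after the first gets a random earlier person as a parent,
    # so the facts stay factually consistent (the relation forms a tree).
    parents = {names[i]: rng.choice(names[:i]) for i in range(1, num_people)}
    corpus = [f"The parent of {child} is {parent}." for child, parent in parents.items()]
    qa_pairs = [(f"Who is the parent of {child}?", parent) for child, parent in parents.items()]
    return corpus, qa_pairs

# A different seed yields a different corpus and answer key, so no fixed
# dataset ever exists that could leak into a model's training data.
docs, qa = generate_instance(seed=42)
print(docs[0])  # e.g. "The parent of Person1 is Person0."
print(qa[0])    # e.g. ("Who is the parent of Person1?", "Person0")

In this toy setting, increasing num_people plays the role of scaling corpus size, and composing relations into multi-hop questions would play the role of increasing question difficulty, the two axes the paper varies to separate retrieval from reasoning.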

Cite

Text

Gong et al. "PhantomWiki: On-Demand Datasets for Reasoning and Retrieval Evaluation." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Gong et al. "PhantomWiki: On-Demand Datasets for Reasoning and Retrieval Evaluation." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/gong2025icml-phantomwiki/)

BibTeX

@inproceedings{gong2025icml-phantomwiki,
  title     = {{PhantomWiki: On-Demand Datasets for Reasoning and Retrieval Evaluation}},
  author    = {Gong, Albert and Stankevičiūtė, Kamilė and Wan, Chao and Kabra, Anmol and Thesmar, Raphael and Lee, Johann and Klenke, Julius and Gomes, Carla P. and Weinberger, Kilian Q.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {19964--19995},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/gong2025icml-phantomwiki/}
}