WikiBigEdit: Understanding the Limits of Lifelong Knowledge Editing in LLMs
Abstract
Keeping large language models factually up-to-date is crucial for deployment, yet costly retraining remains a challenge. Knowledge editing offers a promising alternative, but existing methods are tested only on small-scale or synthetic edit benchmarks. In this work, we aim to bridge research on lifelong knowledge editing to real-world edits at a practically relevant scale. We first introduce WikiBigEdit: a large-scale benchmark of real-world Wikidata edits, built to extend automatically over time for future-proof benchmarking. In its first instance, it comprises over 500K question-answer pairs for knowledge editing alongside a comprehensive evaluation pipeline. Finally, we use WikiBigEdit to study existing knowledge editing techniques' ability to incorporate large volumes of real-world facts, and we contrast their capabilities with generic modification techniques such as retrieval augmentation and continual finetuning to acquire a complete picture of the practical extent of current lifelong knowledge editing.
Cite
Text
Thede et al. "WikiBigEdit: Understanding the Limits of Lifelong Knowledge Editing in LLMs." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Thede et al. "WikiBigEdit: Understanding the Limits of Lifelong Knowledge Editing in LLMs." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/thede2025icml-wikibigedit/)
BibTeX
@inproceedings{thede2025icml-wikibigedit,
title = {{WikiBigEdit: Understanding the Limits of Lifelong Knowledge Editing in LLMs}},
author = {Thede, Lukas and Roth, Karsten and Bethge, Matthias and Akata, Zeynep and Hartvigsen, Thomas},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {59326--59354},
volume = {267},
url = {https://mlanthology.org/icml/2025/thede2025icml-wikibigedit/}
}