DCAD-2000: A Multilingual Dataset Across 2000+ Languages with Data Cleaning as Anomaly Detection
Abstract
The rapid development of multilingual large language models (LLMs) highlights the need for high-quality, diverse, and well-curated multilingual datasets. In this paper, we introduce DCAD-2000 (Data Cleaning as Anomaly Detection), a large-scale multilingual corpus built from newly extracted Common Crawl data and existing multilingual sources. DCAD-2000 covers 2,282 languages, 46.72TB of text, and 8.63 billion documents, including 155 high- and medium-resource languages and 159 writing scripts. To overcome the limitations of existing data cleaning approaches, which rely on manually designed heuristic thresholds, we reframe data cleaning as an anomaly detection problem. This dynamic filtering paradigm substantially improves data quality by automatically identifying and removing noisy or anomalous content. Fine-tuning LLMs on DCAD-2000 demonstrates the dataset's quality, the robustness of the cleaning pipeline, and notable gains on multiple multilingual benchmarks, particularly for low-resource languages.
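To make the "data cleaning as anomaly detection" idea concrete, the minimal sketch below treats per-document cleaning as unsupervised outlier detection over simple quality features, rather than applying fixed heuristic thresholds. The feature set and the use of scikit-learn's IsolationForest are illustrative assumptions for this page, not the paper's released pipeline.

```python
# Minimal sketch: data cleaning framed as anomaly detection (illustrative only).
# Assumptions: hypothetical document-level quality features and scikit-learn's
# IsolationForest as the detector; the paper's actual features/detector may differ.
import numpy as np
from sklearn.ensemble import IsolationForest


def doc_features(text: str) -> list[float]:
    """Compute simple per-document statistics used as anomaly-detection features."""
    words = text.split()
    n_chars = max(len(text), 1)
    n_words = max(len(words), 1)
    return [
        float(len(text)),                              # document length
        sum(c.isalpha() for c in text) / n_chars,      # alphabetic-character ratio
        len(set(words)) / n_words,                     # lexical diversity
        sum(len(w) for w in words) / n_words,          # mean word length
    ]


def clean(docs: list[str], contamination: float = 0.1) -> list[str]:
    """Keep documents labeled as inliers (+1); drop those flagged as anomalies (-1)."""
    X = np.array([doc_features(d) for d in docs])
    labels = IsolationForest(contamination=contamination, random_state=0).fit_predict(X)
    return [d for d, y in zip(docs, labels) if y == 1]
```

Because the detector learns what "typical" documents look like per corpus, the filtering threshold adapts to each language's data distribution instead of relying on a single hand-tuned cutoff.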
Cite
Text
Lai et al. "DCAD-2000: A Multilingual Dataset Across 2000+ Languages with Data Cleaning as Anomaly Detection." Advances in Neural Information Processing Systems, 2025.
Markdown
[Lai et al. "DCAD-2000: A Multilingual Dataset Across 2000+ Languages with Data Cleaning as Anomaly Detection." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/lai2025neurips-dcad2000/)
BibTeX
@inproceedings{lai2025neurips-dcad2000,
  title     = {{DCAD-2000: A Multilingual Dataset Across 2000+ Languages with Data Cleaning as Anomaly Detection}},
  author    = {Lai, Wen and Shen, Yingli and Wang, Shuo and Zhang, Xueren and Luo, Kangyang and Fraser, Alexander and Sun, Maosong},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/lai2025neurips-dcad2000/}
}