SWEb: A Large Web Dataset for the Scandinavian Languages

Abstract

This paper presents the largest pretraining dataset for the Scandinavian languages to date: the Scandinavian WEb (SWEb), comprising over one trillion tokens. The paper details the collection and processing pipeline, and introduces a novel model-based text extractor that significantly reduces complexity in comparison with rule-based approaches. We also introduce a new cloze-style benchmark for evaluating language models in Swedish, and use this test to compare models trained on the SWEb data to models trained on FineWeb, with competitive results. All data, models, and code are shared openly.
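
The cloze-style evaluation mentioned in the abstract can be illustrated with a short sketch: each candidate completion is substituted into the blank, scored by the model's log-likelihood, and the highest-scoring candidate is selected. This is a minimal illustration of the general technique, not the paper's benchmark; the model checkpoint and the Swedish example below are placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def score(text: str) -> float:
    """Sum of token log-probabilities of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Predict token t+1 from positions 0..t, then gather the log-probs
    # of the tokens that actually occur.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = log_probs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()

def cloze_choice(prompt_with_blank: str, candidates: list[str]) -> str:
    """Fill the blank with each candidate; return the highest-scoring one."""
    filled = [prompt_with_blank.replace("___", c) for c in candidates]
    best = max(range(len(candidates)), key=lambda i: score(filled[i]))
    return candidates[best]

# Hypothetical Swedish example ("huvudstad" = capital city):
print(cloze_choice("Stockholm är Sveriges ___.", ["huvudstad", "längsta flod"]))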

Cite

Text

Norlund et al. "SWEb: A Large Web Dataset for the Scandinavian Languages." International Conference on Learning Representations, 2025.

Markdown

[Norlund et al. "SWEb: A Large Web Dataset for the Scandinavian Languages." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/norlund2025iclr-sweb/)

BibTeX

@inproceedings{norlund2025iclr-sweb,
  title     = {{SWEb: A Large Web Dataset for the Scandinavian Languages}},
  author    = {Norlund, Tobias and Isbister, Tim and Gyllensten, Amaru Cuba and dos Santos, Paul Gabriel and Petrelli, Danila and Ekgren, Ariel and Sahlgren, Magnus},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/norlund2025iclr-sweb/}
}