SpidR: Learning Fast and Stable Linguistic Units for Spoken Language Models Without Supervision

Abstract

The parallel advances in language modeling and speech representation learning have raised the prospect of learning language directly from speech without textual intermediates. This requires extracting semantic representations directly from speech. Our contributions are threefold. First, we introduce SpidR, a self-supervised speech representation model that efficiently learns representations with highly accessible phonetic information, which makes it particularly suited for textless spoken language modeling. It is trained on raw waveforms using a masked prediction objective combined with self-distillation and online clustering. The intermediate layers of the student model learn to predict assignments derived from the teacher's intermediate layers. This learning objective stabilizes the online clustering procedure compared to previous approaches, resulting in higher-quality codebooks. SpidR outperforms wav2vec 2.0, HuBERT, WavLM, and DinoSR on downstream language modeling benchmarks (sWUGGY, sBLIMP, tSC). Second, we systematically evaluate the correlation between speech unit quality (ABX, PNMI) and language modeling performance across models and layers, validating these metrics as reliable proxies. Finally, SpidR significantly reduces pretraining time compared to HuBERT, requiring only one day of pretraining on 16 GPUs instead of a week. This speedup is enabled by the pretraining method and an efficient codebase, which allows faster iteration and easier experimentation. We open-source the training code and model checkpoints at https://github.com/facebookresearch/spidr.
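To make the training objective described above concrete, here is a minimal NumPy sketch of one update step: a teacher's intermediate-layer features are discretized by nearest-codebook assignment, a student predicts those assignments on masked frames via cross-entropy, and the codebook is refreshed online with an EMA toward the assigned teacher features. All dimensions, the masking rate, the EMA decay, and the use of random arrays in place of real transformer features are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions, not SpidR's real hyperparameters).
T, D, K = 50, 16, 8  # frames, feature dim, codebook entries

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Stand-ins for intermediate-layer features of teacher and student.
teacher_feats = rng.normal(size=(T, D))
student_feats = teacher_feats + 0.1 * rng.normal(size=(T, D))
prediction_head = rng.normal(size=(D, K))  # student's classification head
codebook = rng.normal(size=(K, D))         # online-clustered codebook

# 1) Teacher side: discrete targets = nearest codebook entry per frame.
dists = ((teacher_feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
assignments = dists.argmin(axis=1)  # (T,) pseudo-labels

# 2) Student side: cross-entropy on masked frames against those targets.
mask = rng.random(T) < 0.5          # which frames are masked
logits = student_feats @ prediction_head  # (T, K)
probs = softmax(logits)
loss = -np.log(probs[mask, assignments[mask]] + 1e-9).mean()

# 3) Online clustering: EMA update of each codebook entry toward the
#    mean of the teacher features currently assigned to it.
decay = 0.99
for k in range(K):
    assigned = teacher_feats[assignments == k]
    if len(assigned):
        codebook[k] = decay * codebook[k] + (1 - decay) * assigned.mean(axis=0)
```

In the full method the teacher weights are themselves an EMA of the student (self-distillation), so targets and codebook evolve together; this sketch freezes the teacher for one step to isolate the assignment, prediction, and clustering updates.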

Cite

Text

Poli et al. "SpidR: Learning Fast and Stable Linguistic Units for Spoken Language Models Without Supervision." Transactions on Machine Learning Research, 2025.

Markdown

[Poli et al. "SpidR: Learning Fast and Stable Linguistic Units for Spoken Language Models Without Supervision." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/poli2025tmlr-spidr/)

BibTeX

@article{poli2025tmlr-spidr,
  title     = {{SpidR: Learning Fast and Stable Linguistic Units for Spoken Language Models Without Supervision}},
  author    = {Poli, Maxime and Luthra, Mahi and Benchekroun, Youssef and Higuchi, Yosuke and Gleize, Martin and Shen, Jiayi and Algayres, Robin and Chung, Yu-An and Assran, Mido and Pino, Juan and Dupoux, Emmanuel},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/poli2025tmlr-spidr/}
}