Discrete Audio Tokens: More than a Survey!
Abstract
Discrete audio tokens are compact representations that aim to preserve perceptual quality, phonetic content, and speaker characteristics while enabling efficient storage and inference, as well as competitive performance across diverse downstream tasks. They provide a practical alternative to continuous features, enabling the integration of speech and audio into modern large language models (LLMs). As interest in token-based audio processing grows, various tokenization methods have emerged, and several surveys have reviewed the latest progress in the field. However, existing studies often focus on specific domains or tasks and lack a unified comparison across various benchmarks. This paper presents a systematic review and benchmark of discrete audio tokenizers, covering three domains: speech, music, and general audio. We propose a taxonomy of tokenization approaches based on encoder-decoder architecture, quantization technique, training paradigm, streamability, and application domain. We evaluate tokenizers on multiple benchmarks for reconstruction, downstream performance, and acoustic language modeling, and analyze trade-offs through controlled ablation studies. Our findings highlight key limitations, practical considerations, and open challenges, providing insight and guidance for future research in this rapidly evolving area. For more information, including our main results and tokenizer database, please refer to our website: https://poonehmousavi.github.io/dates-website/.
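To make the core idea concrete, here is a minimal sketch of vector quantization, one family of the quantization techniques the survey covers: each continuous frame embedding is replaced by the index of its nearest codebook vector, turning an audio feature sequence into a discrete token sequence. The codebook size, embedding dimension, and function names below are illustrative assumptions, not taken from the paper or any specific tokenizer.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative codebook: 256 codes, 64-dimensional embeddings (both assumed).
codebook = rng.normal(size=(256, 64))

def quantize(frames: np.ndarray) -> np.ndarray:
    """Map (T, 64) continuous frame embeddings to (T,) discrete token IDs."""
    # Euclidean distance from every frame to every codebook entry,
    # then pick the nearest code per frame.
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def decode(tokens: np.ndarray) -> np.ndarray:
    """Reconstruct (T, 64) frame embeddings by codebook lookup (lossy)."""
    return codebook[tokens]

frames = rng.normal(size=(100, 64))  # stand-in for 100 frames of audio features
tokens = quantize(frames)            # compact discrete representation
recon = decode(tokens)               # approximate reconstruction
```

Real tokenizers learn the codebook jointly with an encoder-decoder (often stacking several such quantizers residually, as in RVQ), but the nearest-neighbor lookup above is the discretization step they share.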
Cite
Text
Mousavi et al. "Discrete Audio Tokens: More than a Survey!" Transactions on Machine Learning Research, 2025.
Markdown
[Mousavi et al. "Discrete Audio Tokens: More than a Survey!" Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/mousavi2025tmlr-discrete/)
BibTeX
@article{mousavi2025tmlr-discrete,
title = {{Discrete Audio Tokens: More than a Survey!}},
author = {Mousavi, Pooneh and Maimon, Gallil and Moumen, Adel and Petermann, Darius and Shi, Jiatong and Wu, Haibin and Yang, Haici and Kuznetsova, Anastasia and Ploujnikov, Artem and Marxer, Ricard and Ramabhadran, Bhuvana and Elizalde, Benjamin and Lugosch, Loren and Li, Jinyu and Subakan, Cem and Woodland, Phil and Kim, Minje and Lee, Hung-yi and Watanabe, Shinji and Adi, Yossi and Ravanelli, Mirco},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/mousavi2025tmlr-discrete/}
}