Scaling Transformers for Low-Bitrate High-Quality Speech Coding
Abstract
The tokenization of audio with neural audio codec models is a vital part of modern AI pipelines for the generation or understanding of speech, alone or in a multimodal context. Traditionally, such tokenization models have concentrated on low parameter-count architectures using only components with strong inductive biases. In this work we show that by applying a transformer architecture with a large parameter count to this problem, together with a flexible Finite Scalar Quantization (FSQ) based bottleneck, it is possible to reach state-of-the-art speech quality at extremely low bitrates of $400$ or $700$ bits-per-second. The trained models substantially outperform existing baselines in both objective and subjective tests.
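To make the bottleneck concrete: FSQ quantizes each latent dimension independently to a small fixed set of levels, so the implicit codebook is the Cartesian product of per-dimension levels and needs no learned codebook or commitment losses. Below is a minimal, hedged NumPy sketch of the FSQ rounding step for odd level counts (even level counts require an extra half-step offset, omitted here); it is an illustration of the general FSQ technique, not the authors' implementation, and the function name and level choices are hypothetical.

```python
import numpy as np

def fsq_quantize(z, levels):
    """Quantize each latent dimension of z to a fixed odd number of levels.

    z:      array of latent values, one entry per dimension.
    levels: per-dimension level counts, e.g. [5, 5, 5, 5] (odd counts only
            in this sketch; even counts need a half-step offset).
    """
    levels = np.asarray(levels, dtype=np.float64)
    half = (levels - 1) / 2.0
    bounded = np.tanh(z) * half      # squash each dim into (-half, half)
    return np.round(bounded)         # snap to the nearest integer level

# Example: 4 dimensions with 5 levels each gives an implicit codebook of
# 5**4 = 625 entries, i.e. about log2(625) ≈ 9.3 bits per latent frame.
q = fsq_quantize(np.array([10.0, -10.0, 0.1, 0.0]), [5, 5, 5, 5])
```

The bitrate then follows from the frame rate: the product of per-dimension levels fixes the bits per frame, and the encoder's temporal downsampling fixes how many frames per second are transmitted. (In training, the rounding step would use a straight-through gradient estimator, which this inference-only sketch omits.)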
Cite
Text
Parker et al. "Scaling Transformers for Low-Bitrate High-Quality Speech Coding." International Conference on Learning Representations, 2025.
Markdown
[Parker et al. "Scaling Transformers for Low-Bitrate High-Quality Speech Coding." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/parker2025iclr-scaling/)
BibTeX
@inproceedings{parker2025iclr-scaling,
title = {{Scaling Transformers for Low-Bitrate High-Quality Speech Coding}},
author = {Parker, Julian D and Smirnov, Anton and Pons, Jordi and Carr, Cj and Zukowski, Zack and Evans, Zach and Liu, Xubo},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/parker2025iclr-scaling/}
}