VRVQ: Variable Bitrate Residual Vector Quantization for Audio Compression

Abstract

Recent state-of-the-art neural audio compression models have progressively adopted residual vector quantization (RVQ). Despite this success, these models employ a fixed number of codebooks per frame, which can be suboptimal in terms of rate-distortion tradeoff, particularly in scenarios with simple input audio, such as silence. To address this limitation, we propose variable bitrate RVQ (VRVQ) for audio codecs, which allows for more efficient coding by adapting the number of codebooks used per frame. Furthermore, we propose a gradient estimation method for the non-differentiable masking operation that transforms the importance map into a binary importance mask, improving model training via a straight-through estimator. We demonstrate that the proposed training framework achieves superior results compared to the baseline method and shows further improvement when applied to the current state-of-the-art codec.
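To make the abstract's core idea concrete, the sketch below shows one plausible way a per-frame importance score could be turned into a binary mask over RVQ stages, so that simple frames (e.g. silence) use fewer codebooks. This is a hypothetical illustration, not the paper's exact mapping: the function name, the ceiling-based mapping, and the shapes are all assumptions. The thresholding here is non-differentiable; in training, the paper addresses this with a straight-through estimator (hard mask in the forward pass, surrogate gradient in the backward pass), which this NumPy forward-only sketch merely notes in a comment.

```python
import numpy as np

def importance_to_mask(importance: np.ndarray, n_codebooks: int) -> np.ndarray:
    """Map per-frame importance scores in [0, 1] to a binary mask over RVQ stages.

    Hypothetical rule: frame t keeps its first ceil(importance[t] * n_codebooks)
    codebooks, so low-importance frames (e.g. silence) spend fewer bits.
    The ceil/comparison below is non-differentiable; a trainable version would
    replace its gradient with a straight-through estimator.
    """
    n_active = np.ceil(importance * n_codebooks).astype(int)   # (T,) codebooks per frame
    stage_idx = np.arange(n_codebooks)[None, :]                # (1, Nq) stage indices
    return (stage_idx < n_active[:, None]).astype(np.float32)  # (T, Nq) binary mask

# Example: three frames of increasing importance, 4 RVQ stages.
importance = np.array([0.0, 0.3, 1.0])
mask = importance_to_mask(importance, n_codebooks=4)
# Silent frame uses 0 codebooks, the mid frame 2, the complex frame all 4.
```

The mask would then gate the residual quantizer outputs per frame, e.g. `z_hat[t] = sum_i mask[t, i] * q_i(residual_i[t])`, so the effective bitrate varies with content.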

Cite

Text

Chae et al. "VRVQ: Variable Bitrate Residual Vector Quantization for Audio Compression." NeurIPS 2024 Workshops: Compression, 2024.

Markdown

[Chae et al. "VRVQ: Variable Bitrate Residual Vector Quantization for Audio Compression." NeurIPS 2024 Workshops: Compression, 2024.](https://mlanthology.org/neuripsw/2024/chae2024neuripsw-vrvq/)

BibTeX

@inproceedings{chae2024neuripsw-vrvq,
  title     = {{VRVQ: Variable Bitrate Residual Vector Quantization for Audio Compression}},
  author    = {Chae, Yunkee and Choi, Woosung and Takida, Yuhta and Koo, Junghyun and Ikemiya, Yukara and Zhong, Zhi and Cheuk, Kin Wai and Martínez-Ramírez, Marco A. and Lee, Kyogu and Liao, Wei-Hsiang and Mitsufuji, Yuki},
  booktitle = {NeurIPS 2024 Workshops: Compression},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/chae2024neuripsw-vrvq/}
}