Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM

Abstract

We present Spectron, a novel approach to adapting pre-trained large language models (LLMs) to perform spoken question answering (QA) and speech continuation. By endowing the LLM with a pre-trained speech encoder, our model can accept speech inputs and generate speech outputs. The entire system is trained end-to-end and operates directly on spectrograms, simplifying our architecture. Key to our approach is a training objective that jointly supervises speech recognition, text continuation, and speech synthesis using only paired speech-text data, enabling a 'cross-modal' chain-of-thought within a single decoding pass. Our method surpasses existing spoken language models in speaker preservation and semantic coherence. Furthermore, the proposed model improves upon direct initialization in retaining the knowledge of the original LLM, as demonstrated through spoken QA datasets. We release our audio samples and spoken QA dataset via our website.

Cite

Text

Nachmani et al. "Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM." International Conference on Learning Representations, 2024.

Markdown

[Nachmani et al. "Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/nachmani2024iclr-spoken/)

BibTeX

@inproceedings{nachmani2024iclr-spoken,
  title     = {{Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM}},
  author    = {Nachmani, Eliya and Levkovitch, Alon and Hirsch, Roy and Salazar, Julian and Asawaroengchai, Chulayuth and Mariooryad, Soroosh and Rivlin, Ehud and Skerry-Ryan, RJ and Ramanovich, Michelle Tadmor},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/nachmani2024iclr-spoken/}
}