Can Language Models Learn to Listen?

Abstract

We present a framework for generating appropriate facial responses from a listener in dyadic social interactions based on the speaker's words. Given an input transcription of the speaker's words with their timestamps, our approach autoregressively predicts a listener's response: a sequence of listener facial gestures, quantized using a VQ-VAE. Since gesture is a component of language, we propose treating the quantized atomic motion elements as additional language token inputs to a transformer-based large language model. Initializing our transformer with the weights of a language model pre-trained only on text results in significantly higher quality listener responses than training a transformer from scratch. We show that our generated listener motion is fluent and reflective of language semantics through quantitative metrics and a qualitative user study. In our evaluation, we analyze the model's ability to utilize temporal and semantic aspects of spoken text.
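
To make the core idea concrete, below is a minimal sketch (not the authors' released code) of treating VQ-VAE motion codes as extra vocabulary tokens in a text-pretrained language model, so listener motion can be predicted autoregressively from the speaker's transcript. The codebook size, token names, and greedy decoding loop are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation): extend a
# text-pretrained GPT-2 vocabulary with VQ-VAE motion codes and predict
# listener-motion tokens autoregressively from the speaker's words.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

NUM_MOTION_CODES = 256  # hypothetical VQ-VAE codebook size

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # weights pre-trained only on text

# Treat each quantized atomic motion element as an additional "word".
motion_tokens = [f"<motion_{i}>" for i in range(NUM_MOTION_CODES)]
tokenizer.add_tokens(motion_tokens)
model.resize_token_embeddings(len(tokenizer))

def predict_listener_motion(transcript: str, num_steps: int = 32) -> list:
    """Autoregressively sample listener motion codes conditioned on the
    speaker's transcript (greedy decoding kept simple for brevity)."""
    ids = tokenizer(transcript, return_tensors="pt").input_ids
    motion_code_start = len(tokenizer) - NUM_MOTION_CODES
    codes = []
    for _ in range(num_steps):
        logits = model(ids).logits[:, -1, :]
        # Restrict prediction to the motion-token region of the vocabulary.
        next_id = logits[:, motion_code_start:].argmax(dim=-1) + motion_code_start
        codes.append(int(next_id) - motion_code_start)
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)
    # The returned indices would be mapped back to facial motion by the
    # VQ-VAE decoder in a full pipeline.
    return codes

print(predict_listener_motion("That's such great news, congratulations!"))
```

In practice such a model would be fine-tuned on paired transcript/motion-token sequences; the sketch only shows why initializing from text-pretrained weights and enlarging the embedding table lets motion tokens share the language model's representation space.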

Cite

Text

Ng et al. "Can Language Models Learn to Listen?" International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00925

Markdown

[Ng et al. "Can Language Models Learn to Listen?" International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/ng2023iccv-language/) doi:10.1109/ICCV51070.2023.00925

BibTeX

@inproceedings{ng2023iccv-language,
  title     = {{Can Language Models Learn to Listen?}},
  author    = {Ng, Evonne and Subramanian, Sanjay and Klein, Dan and Kanazawa, Angjoo and Darrell, Trevor and Ginosar, Shiry},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {10083--10093},
  doi       = {10.1109/ICCV51070.2023.00925},
  url       = {https://mlanthology.org/iccv/2023/ng2023iccv-language/}
}