Watch and Listen: Understanding Audio-Visual-Speech Moments with Multimodal LLM

Abstract

Humans naturally understand moments in a video by integrating visual and auditory cues. For example, localizing a moment such as “A scientist passionately speaks on wildlife conservation as dramatic orchestral music plays, with the audience nodding and applauding” requires simultaneous processing of visual, audio, and speech signals. However, existing models often struggle to effectively fuse and interpret audio information, limiting their capacity for comprehensive video temporal understanding. To address this, we present TriSense, a triple-modality large language model designed for holistic video temporal understanding through the integration of visual, audio, and speech modalities. Central to TriSense is a Query-Based Connector that adaptively reweights modality contributions based on the input query, enabling robust performance under modality dropout and allowing flexible combinations of available inputs. To support TriSense's multimodal capabilities, we introduce TriSense-2M, a high-quality dataset of over 2 million curated samples generated via an automated pipeline powered by fine-tuned LLMs. TriSense-2M includes long-form videos and diverse modality combinations, facilitating broad generalization. Extensive experiments across multiple benchmarks demonstrate the effectiveness of TriSense and its potential to advance multimodal video analysis.
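
To make the idea behind the Query-Based Connector concrete, the minimal PyTorch sketch below shows one plausible way to reweight modality features conditioned on a query while staying robust to modality dropout. It is an illustrative assumption, not the paper's implementation: the class name QueryBasedConnector, the single linear gate, and the present_mask argument are all hypothetical constructions for exposition.

import torch
import torch.nn as nn

class QueryBasedConnector(nn.Module):
    """Toy query-conditioned modality reweighting (illustrative sketch only)."""
    def __init__(self, dim: int, num_modalities: int = 3):
        super().__init__()
        # One gate logit per modality, conditioned on the query embedding.
        self.gate = nn.Linear(dim, num_modalities)

    def forward(self, query_emb, modality_feats, present_mask):
        # query_emb:      (B, D) pooled text-query embedding
        # modality_feats: (B, M, D) stacked visual / audio / speech features
        # present_mask:   (B, M) bool; False where a modality is dropped
        logits = self.gate(query_emb)                     # (B, M)
        logits = logits.masked_fill(~present_mask, -1e9)  # exclude missing modalities
        weights = torch.softmax(logits, dim=-1)           # (B, M), sums to 1 over present ones
        # Weighted sum fuses only the modalities that are actually available.
        return (weights.unsqueeze(-1) * modality_feats).sum(dim=1)  # (B, D)

# Usage: batch item 0 has visual + speech but no audio; item 1 has all three.
B, M, D = 2, 3, 16
connector = QueryBasedConnector(D, M)
fused = connector(
    torch.randn(B, D),
    torch.randn(B, M, D),
    torch.tensor([[True, False, True], [True, True, True]]),
)
print(fused.shape)  # torch.Size([2, 16])

Masking the gate logits before the softmax renormalizes the weights over whichever modalities remain, which is one simple way a connector of this kind can degrade gracefully under modality dropout and accept flexible input combinations.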

Cite

Text

Li et al. "Watch and Listen: Understanding Audio-Visual-Speech Moments with Multimodal LLM." Advances in Neural Information Processing Systems, 2025.

Markdown

[Li et al. "Watch and Listen: Understanding Audio-Visual-Speech Moments with Multimodal LLM." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/li2025neurips-watch/)

BibTeX

@inproceedings{li2025neurips-watch,
  title     = {{Watch and Listen: Understanding Audio-Visual-Speech Moments with Multimodal LLM}},
  author    = {Li, Zinuo and Zhang, Xian and Guo, Yongxin and Bennamoun, Mohammed and Boussaid, Farid and Dwivedi, Girish and Gong, Luqi and Ke, Qiuhong},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/li2025neurips-watch/}
}