Breaking the Encoder Barrier for Seamless Video-Language Understanding

Abstract

Most Video-Large Language Models (Video-LLMs) adopt an encoder-decoder framework, where a vision encoder extracts frame-wise features for processing by a language model. However, this approach incurs high computational costs, introduces resolution biases, and struggles to capture fine-grained multimodal interactions. To overcome these limitations, we propose ELVA, an encoder-free Video-LLM that directly models nuanced video-language interactions without relying on a vision encoder. ELVA employs token merging to construct a bottom-up hierarchical representation and incorporates a video guidance supervisor for direct spatiotemporal representation learning. Additionally, a hybrid-resolution mechanism integrates high- and low-resolution frames as inputs to balance performance and efficiency. With only 7M publicly available video-text pairs, ELVA achieves competitive performance compared to encoder-based Video-LLMs while reducing FLOPs by up to 95% and inference latency by 92%, offering a scalable and efficient solution for real-time video understanding.
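
Two of the mechanisms named above, similarity-based token merging and hybrid-resolution frame inputs, lend themselves to a short sketch. The PyTorch code below is a minimal illustration of one plausible reading of the abstract, not ELVA's reported implementation: the adjacent-pair merge rule, the merge count r, and the stride and low_size hyperparameters are all assumptions made for illustration.

import torch
import torch.nn.functional as F

def merge_adjacent_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    """Greedily average the r most similar adjacent token pairs.

    x: (num_tokens, dim) patch tokens for one frame.
    Returns (num_tokens - r, dim); applying this repeatedly yields
    progressively coarser levels of a bottom-up token hierarchy.
    """
    for _ in range(r):
        # Cosine similarity between each token and its right neighbour.
        sim = F.cosine_similarity(x[:-1], x[1:], dim=-1)
        i = int(sim.argmax())  # most redundant adjacent pair
        merged = 0.5 * (x[i] + x[i + 1])
        x = torch.cat([x[:i], merged.unsqueeze(0), x[i + 2:]], dim=0)
    return x

def hybrid_resolution_frames(frames: torch.Tensor, stride: int = 4,
                             low_size: int = 112) -> list[torch.Tensor]:
    """Keep every `stride`-th frame at full resolution, downsample the rest.

    frames: (T, C, H, W) video clip. `stride` and `low_size` are assumed
    hyperparameters, not values reported by the paper.
    """
    out = []
    for t, f in enumerate(frames):
        if t % stride == 0:
            out.append(f)  # high-resolution anchor frame
        else:
            out.append(F.interpolate(f.unsqueeze(0),
                                     size=(low_size, low_size),
                                     mode="bilinear",
                                     align_corners=False)[0])
    return out

Under these assumptions, merge_adjacent_tokens would be called once per hierarchy level with an increasing r, and hybrid_resolution_frames shows how high- and low-resolution frames could be interleaved before tokenization so that most frames contribute far fewer tokens.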

Cite

Text

Li et al. "Breaking the Encoder Barrier for Seamless Video-Language Understanding." International Conference on Computer Vision, 2025.

Markdown

[Li et al. "Breaking the Encoder Barrier for Seamless Video-Language Understanding." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/li2025iccv-breaking/)

BibTeX

@inproceedings{li2025iccv-breaking,
  title     = {{Breaking the Encoder Barrier for Seamless Video-Language Understanding}},
  author    = {Li, Handong and Zhang, Yiyuan and Guo, Longteng and Yue, Xiangyu and Liu, Jing},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {23167--23176},
  url       = {https://mlanthology.org/iccv/2025/li2025iccv-breaking/}
}