Unifying Specialized Visual Encoders for Video Language Models

Abstract

Recent advances in vision backbones have yielded powerful and diverse visual and video encoders. Yet, current Video Large Language Models encode visual inputs using an encoder from a single backbone family, limiting the amount and type of visual information they can process. We propose MERV, a Multi-Encoder Representation of Videos, which utilizes multiple encoders for a comprehensive video representation. To optimize heterogeneous features from a broad spectrum of encoders and ensure efficient and coherent feature integration, MERV first aligns encoder features spatio-temporally, then projects them into a unified structure, and finally fuses them through cross-attention. Under fair comparison, MERV achieves up to 4.62% higher accuracy than its base model, while introducing minimal extra parameters and training faster than equivalent single-encoder methods once visual processing is parallelized. Qualitative analysis shows that MERV successfully captures and integrates the domain knowledge of each encoder, opening new possibilities for more comprehensive video understanding.
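
For intuition, below is a minimal PyTorch sketch of the three-stage fusion the abstract describes: per-encoder projection into a shared width, alignment to a common token count, and cross-attention fusion with learnable queries. The module names, dimensions, and the interpolation-based alignment are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiEncoderFusion(nn.Module):
    """Sketch of a MERV-style fusion head: project heterogeneous encoder
    features to one width, align token counts, fuse via cross-attention.
    All sizes here are hypothetical."""

    def __init__(self, encoder_dims, hidden_dim=1024, num_tokens=256):
        super().__init__()
        # One projector per encoder unifies the differing channel widths.
        self.projectors = nn.ModuleList(
            nn.Linear(d, hidden_dim) for d in encoder_dims
        )
        self.num_tokens = num_tokens
        # Learnable queries pool the concatenated encoder tokens.
        self.queries = nn.Parameter(torch.randn(num_tokens, hidden_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=8,
                                                batch_first=True)

    def forward(self, features):
        # features: list of (batch, tokens_i, dim_i) tensors, one per encoder.
        aligned = []
        for feat, proj in zip(features, self.projectors):
            x = proj(feat)  # (batch, tokens_i, hidden_dim)
            # Stand-in for the paper's spatio-temporal alignment: resample
            # each encoder's token sequence to a common length.
            x = F.interpolate(x.transpose(1, 2), size=self.num_tokens,
                              mode="linear", align_corners=False).transpose(1, 2)
            aligned.append(x)
        tokens = torch.cat(aligned, dim=1)  # (batch, n_enc * num_tokens, hidden)
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        fused, _ = self.cross_attn(q, tokens, tokens)  # (batch, num_tokens, hidden)
        return fused  # passed to the language model as visual tokens

# Example with four encoders of differing token counts and widths (made up):
feats = [torch.randn(2, n, d) for n, d in
         [(196, 768), (1568, 1024), (256, 1152), (784, 384)]]
fusion = MultiEncoderFusion([768, 1024, 1152, 384])
print(fusion(feats).shape)  # torch.Size([2, 256, 1024])

Note that the encoders themselves are frozen and independent in this design, so their forward passes can run concurrently, which is the parallelization of visual processing that the abstract credits for the training speedup.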

Cite

Text

Chung et al. "Unifying Specialized Visual Encoders for Video Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Chung et al. "Unifying Specialized Visual Encoders for Video Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/chung2025icml-unifying/)

BibTeX

@inproceedings{chung2025icml-unifying,
  title     = {{Unifying Specialized Visual Encoders for Video Language Models}},
  author    = {Chung, Jihoon and Zhu, Tyler and Saez-Diez, Max Gonzalez and Niebles, Juan Carlos and Zhou, Honglu and Russakovsky, Olga},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {10879--10900},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/chung2025icml-unifying/}
}