Language Repository for Long Video Understanding

Abstract

Language has become a prominent modality in computer vision with the rise of multi-modal LLMs. Despite supporting long context lengths, their effectiveness in handling long-term information gradually declines with input length. This becomes critical, especially in applications such as long-form video understanding. In this paper, we introduce a Language Repository (LangRepo) for LLMs that maintains concise and structured information as an interpretable (i.e., all-textual) representation. It consists of write and read operations that focus on pruning redundancies in text and extracting information at various temporal scales. The proposed framework is evaluated on zero-shot video VQA benchmarks, showing state-of-the-art performance at its scale. Our code is available at https://github.com/kkahatapitiya/LangRepo.
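To make the write/read idea from the abstract concrete, below is a minimal, hypothetical Python sketch of a LangRepo-style store. It is not the authors' implementation (see the linked repository for that); the class name, the similarity-based pruning, and the summarize callback are illustrative assumptions. Writes add new captions while pruning near-duplicate text, and reads summarize the stored text over windows of different temporal scales.

# Hypothetical sketch only; names and pruning heuristic are assumptions,
# not the released LangRepo code.
from difflib import SequenceMatcher
from typing import Callable, List


class LanguageRepository:
    def __init__(self, summarize: Callable[[List[str]], str], sim_threshold: float = 0.8):
        self.summarize = summarize          # e.g., a wrapper around an LLM prompt
        self.sim_threshold = sim_threshold  # redundancy cutoff for pruning
        self.entries: List[str] = []        # chronologically ordered captions

    def write(self, captions: List[str]) -> None:
        """Add new captions, skipping ones nearly identical to stored text."""
        for cap in captions:
            if not any(SequenceMatcher(None, cap, old).ratio() >= self.sim_threshold
                       for old in self.entries):
                self.entries.append(cap)

    def read(self, scales: List[int]) -> List[str]:
        """Summarize stored captions over windows at several temporal scales."""
        outputs = []
        for window in scales:
            for start in range(0, len(self.entries), window):
                chunk = self.entries[start:start + window]
                outputs.append(self.summarize(chunk))
        return outputs


# Toy usage with a trivial summarizer standing in for an LLM call.
repo = LanguageRepository(summarize=lambda chunk: " / ".join(chunk))
repo.write(["a person opens a door", "a person opens the door", "they sit at a desk"])
print(repo.read(scales=[2, 4]))

In this sketch, the second caption is pruned as redundant, and the read step returns short-scale and long-scale summaries that an LLM could consume as the all-textual video representation described in the abstract.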

Cite

Text

Kahatapitiya et al. "Language Repository for Long Video Understanding." NeurIPS 2024 Workshops: Video-Language Models, 2024.

Markdown

[Kahatapitiya et al. "Language Repository for Long Video Understanding." NeurIPS 2024 Workshops: Video-Language Models, 2024.](https://mlanthology.org/neuripsw/2024/kahatapitiya2024neuripsw-language/)

BibTeX

@inproceedings{kahatapitiya2024neuripsw-language,
  title     = {{Language Repository for Long Video Understanding}},
  author    = {Kahatapitiya, Kumara and Ranasinghe, Kanchana and Park, Jongwoo and Ryoo, Michael S},
  booktitle = {NeurIPS 2024 Workshops: Video-Language Models},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/kahatapitiya2024neuripsw-language/}
}