Unhackable Temporal Reward for Scalable Video MLLMs

Abstract

In the pursuit of superior video-processing multimodal large language models (MLLMs), we have encountered a perplexing paradox: the “anti-scaling law”, where more data and larger models lead to worse performance. This study unmasks the culprit: “temporal hacking”, a phenomenon where models take a shortcut by fixating on a few select frames and missing the full video narrative. In this work, we systematically establish a comprehensive theory of temporal hacking: we define it from a reinforcement learning perspective, introduce the Temporal Perplexity (TPL) score to assess this misalignment, and propose the Unhackable Temporal Rewarding (UTR) framework to mitigate it. Both theoretically and empirically, TPL proves to be a reliable indicator of temporal modeling quality, correlating strongly with frame activation patterns. Extensive experiments reveal that UTR not only counters temporal hacking but also significantly elevates video comprehension capabilities. This work not only advances video-AI systems but also illuminates the critical importance of aligning proxy rewards with true objectives in MLLM development.
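
The abstract does not give TPL's exact formulation, but the "perplexity" naming and its reported correlation with frame activation patterns suggest the following minimal sketch, offered purely as an illustration: here TPL is assumed to be the perplexity (exponentiated entropy) of a model's attention mass over video frames, so balanced attention across frames yields a high score while attention collapsed onto a few frames (a symptom of temporal hacking) yields a low one. The function name temporal_perplexity and the input convention are assumptions for this sketch, not the paper's definition or API.

import numpy as np

def temporal_perplexity(frame_attention: np.ndarray, eps: float = 1e-12) -> float:
    """Hypothetical TPL sketch: perplexity of attention mass over frames.

    frame_attention: non-negative per-frame attention/activation scores,
    shape (num_frames,). This is an assumed formulation; the paper's exact
    definition may differ.
    """
    p = frame_attention / (frame_attention.sum() + eps)  # normalize to a distribution
    entropy = -np.sum(p * np.log(p + eps))               # Shannon entropy in nats
    return float(np.exp(entropy))                        # perplexity = exp(entropy)

# A model attending uniformly to all 8 frames scores near 8 (high TPL);
# one fixating on a single frame scores near 1 (low TPL, temporal hacking).
uniform = np.ones(8)
collapsed = np.array([100.0, 1, 1, 1, 1, 1, 1, 1])
print(temporal_perplexity(uniform))    # ~8.0
print(temporal_perplexity(collapsed))  # ~1.4

Under this reading, the score's range is [1, num_frames], which makes it easy to compare models evaluated on clips of the same length.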

Cite

Text

Yu et al. "Unhackable Temporal Reward for Scalable Video MLLMs." International Conference on Learning Representations, 2025.

Markdown

[Yu et al. "Unhackable Temporal Reward for Scalable Video MLLMs." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yu2025iclr-unhackable/)

BibTeX

@inproceedings{yu2025iclr-unhackable,
  title     = {{Unhackable Temporal Reward for Scalable Video MLLMs}},
  author    = {Yu, En and Lin, Kangheng and Zhao, Liang and Wei, Yana and Zhu, Zining and Wei, Haoran and Sun, Jianjian and Ge, Zheng and Zhang, Xiangyu and Wang, Jingyu and Tao, Wenbing},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/yu2025iclr-unhackable/}
}