Perceive Anything: Recognize, Explain, Caption, and Segment Anything in Images and Videos
Abstract
We present the Perceive Anything Model (PAM), a conceptually straightforward and efficient framework for comprehensive region-level visual understanding in images and videos. Our approach extends the powerful segmentation model SAM 2 by integrating Large Language Models (LLMs), enabling simultaneous object segmentation and the generation of diverse, region-specific semantic outputs, including categories, label definitions, functional explanations, and detailed captions. A key component, the Semantic Perceiver, is introduced to efficiently transform SAM 2's rich visual features, which inherently carry general vision, localization, and semantic priors, into multi-modal tokens for LLM comprehension. To support robust multi-granularity understanding, we also develop a dedicated data refinement and augmentation pipeline, yielding a high-quality dataset of 1.5M image and 0.6M video region-semantic annotations, including novel region-level streaming video caption data. PAM is designed to be lightweight and efficient, while also demonstrating strong performance across a diverse range of region understanding tasks. It runs 1.2$-$2.4$\times$ faster and consumes less GPU memory than prior approaches, offering a practical solution for real-world applications. We believe our effective approach will serve as a strong baseline for future research in region-level visual understanding.
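To make the adapter idea concrete, below is a minimal sketch of what a Semantic Perceiver-style module could look like: a set of learnable queries that cross-attend to SAM 2's flattened visual features and project the result into the LLM's embedding space. All names, dimensions, and the query/cross-attention design here are assumptions for illustration; the paper's actual architecture may differ.

```python
# Hypothetical sketch (not the paper's implementation): project SAM 2
# visual features into multi-modal tokens an LLM can consume.
import torch
import torch.nn as nn

class SemanticPerceiver(nn.Module):
    def __init__(self, sam_feat_dim: int = 256, llm_embed_dim: int = 2048,
                 num_tokens: int = 64):
        super().__init__()
        # Learnable queries that attend to SAM 2 features (assumed design).
        self.queries = nn.Parameter(torch.randn(num_tokens, sam_feat_dim))
        self.cross_attn = nn.MultiheadAttention(sam_feat_dim, num_heads=8,
                                                batch_first=True)
        self.proj = nn.Linear(sam_feat_dim, llm_embed_dim)

    def forward(self, sam_features: torch.Tensor) -> torch.Tensor:
        # sam_features: (B, N, sam_feat_dim) flattened spatial features
        # carrying SAM 2's vision, localization, and semantic priors.
        b = sam_features.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        tokens, _ = self.cross_attn(q, sam_features, sam_features)
        # (B, num_tokens, llm_embed_dim), ready to interleave with text tokens.
        return self.proj(tokens)

# Usage sketch: a 32x32 SAM 2 feature map, flattened to 1024 positions.
perceiver = SemanticPerceiver()
feats = torch.randn(1, 1024, 256)
mm_tokens = perceiver(feats)  # shape: (1, 64, 2048)
```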
Cite
Text
Lin et al. "Perceive Anything: Recognize, Explain, Caption, and Segment Anything in Images and Videos." Advances in Neural Information Processing Systems, 2025.
Markdown
[Lin et al. "Perceive Anything: Recognize, Explain, Caption, and Segment Anything in Images and Videos." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/lin2025neurips-perceive/)
BibTeX
@inproceedings{lin2025neurips-perceive,
  title     = {{Perceive Anything: Recognize, Explain, Caption, and Segment Anything in Images and Videos}},
  author    = {Lin, Weifeng and Wei, Xinyu and An, Ruichuan and Ren, Tianhe and Chen, Tingwei and Zhang, Renrui and Guo, Ziyu and Zhang, Wentao and Zhang, Lei and Li, Hongsheng},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/lin2025neurips-perceive/}
}