Describe Anything: Detailed Localized Image and Video Captioning
Abstract
Generating detailed and accurate descriptions for specific regions in images and videos remains a fundamental challenge for vision-language models. We introduce the Describe Anything Model (DAM), a model designed for detailed localized captioning (DLC). DAM preserves both local details and global context through two key innovations: a focal prompt, which ensures high-resolution encoding of targeted regions, and a localized vision backbone, which integrates precise localization with its broader context. To tackle the scarcity of high-quality DLC data, we propose a Semi-supervised learning (SSL)-based Data Pipeline (DLC-SDP). DLC-SDP starts with existing segmentation datasets and expands to unlabeled web images using SSL. We introduce DLC-Bench, a benchmark designed to evaluate DLC without relying on reference captions. DAM sets a new state of the art on 7 benchmarks spanning keyword-level, phrase-level, and detailed multi-sentence localized image and video captioning.
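To make the focal-prompt idea concrete, here is a minimal NumPy sketch, not the authors' implementation: it pairs the full image with a high-resolution crop around the target mask, each accompanied by its mask, so the region is encoded at high resolution while global context is preserved. The `focal_crop` helper and the `expand` factor are illustrative assumptions; the paper's exact recipe may differ.

```python
import numpy as np

def focal_crop(image: np.ndarray, mask: np.ndarray, expand: float = 1.5):
    """Crop an expanded box around the masked region at full resolution.

    `expand` (a hypothetical parameter) scales the tight bounding box so
    the crop keeps some surrounding context.
    """
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    cy, cx = (y0 + y1) / 2, (x0 + x1) / 2
    h, w = (y1 - y0) * expand, (x1 - x0) * expand
    y0 = max(int(cy - h / 2), 0)
    y1 = min(int(cy + h / 2), image.shape[0])
    x0 = max(int(cx - w / 2), 0)
    x1 = min(int(cx + w / 2), image.shape[1])
    return image[y0:y1, x0:x1], mask[y0:y1, x0:x1]

# The "focal prompt": the full image plus a high-resolution crop of the
# target region, each paired with its mask to mark what to describe.
image = np.zeros((480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool)
mask[100:200, 300:420] = True
crop, crop_mask = focal_crop(image, mask)
focal_prompt = {"global": (image, mask), "focal": (crop, crop_mask)}
```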
Cite
Text
Lian et al. "Describe Anything: Detailed Localized Image and Video Captioning." International Conference on Computer Vision, 2025.
Markdown
[Lian et al. "Describe Anything: Detailed Localized Image and Video Captioning." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/lian2025iccv-describe/)
BibTeX
@inproceedings{lian2025iccv-describe,
title = {{Describe Anything: Detailed Localized Image and Video Captioning}},
author = {Lian, Long and Ding, Yifan and Ge, Yunhao and Liu, Sifei and Mao, Hanzi and Li, Boyi and Pavone, Marco and Liu, Ming-Yu and Darrell, Trevor and Yala, Adam and Cui, Yin},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {21766--21777},
url = {https://mlanthology.org/iccv/2025/lian2025iccv-describe/}
}