Pixel-Aligned Language Model

Abstract

Large language models have achieved great success in recent years, and so have their variants in vision. Existing vision-language models can describe images in natural language, answer visual-related questions, or perform complex reasoning about the image. However, it is yet unclear how localization tasks, such as word grounding or referring localization, can be performed using large language models. In this work, we aim to develop a vision-language model that can take locations, for example a set of points or boxes, as either inputs or outputs. When taking locations as inputs, the model performs location-conditioned captioning, which generates captions for the indicated object or region. When generating locations as outputs, our model regresses pixel coordinates for each output word generated by the language model, and thus performs dense word grounding. Our model is pre-trained on the Localized Narrative dataset, which contains pixel-word-aligned captioning from human attention. We show our model can be applied to various location-aware vision-language tasks, including referring localization, location-conditioned captioning, and dense object captioning, achieving state-of-the-art performance on RefCOCO and Visual Genome.
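To make the per-word localization idea concrete, below is a minimal sketch of a regression head that maps each generated token's hidden state to a 2D point, which is the general mechanism the abstract describes. This is an illustrative assumption, not the paper's implementation: the class name `PerWordLocationHead`, the 768-dim hidden size, and the MLP design are all hypothetical.

```python
import torch
import torch.nn as nn


class PerWordLocationHead(nn.Module):
    """Illustrative head that regresses an (x, y) coordinate for every
    output-token hidden state produced by a language model decoder."""

    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, 2),  # (x, y), normalized to [0, 1]
        )

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden_dim) decoder hidden states
        # returns:      (batch, seq_len, 2), one point per generated word
        return self.mlp(token_states).sigmoid()


# Usage sketch: attach the head to the decoder's per-token hidden states.
head = PerWordLocationHead(hidden_dim=768)
hidden = torch.randn(1, 12, 768)   # e.g. 12 generated caption tokens
points = head(hidden)              # (1, 12, 2) pixel-aligned points
```

In this sketch, each word of the generated caption gets its own predicted point, which is the dense word grounding behavior described above; the actual model's architecture and training losses are detailed in the paper.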

Cite

Text

Xu et al. "Pixel-Aligned Language Model." Conference on Computer Vision and Pattern Recognition, 2024.

Markdown

[Xu et al. "Pixel-Aligned Language Model." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/xu2024cvpr-pixelaligned/)

BibTeX

@inproceedings{xu2024cvpr-pixelaligned,
  title     = {{Pixel-Aligned Language Model}},
  author    = {Xu, Jiarui and Zhou, Xingyi and Yan, Shen and Gu, Xiuye and Arnab, Anurag and Sun, Chen and Wang, Xiaolong and Schmid, Cordelia},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {13030--13039},
  url       = {https://mlanthology.org/cvpr/2024/xu2024cvpr-pixelaligned/}
}