Grounding Language Models for Visual Entity Recognition
Abstract
We introduce AutoVER, an Autoregressive model for Visual Entity Recognition. Our model extends an autoregressive Multimodal Large Language Model by employing retrieval-augmented constrained generation. It mitigates low performance on out-of-domain entities while excelling in queries that require visual reasoning. Our method learns to distinguish similar entities within a vast label space by contrastively training on hard negative pairs in parallel with a sequence-to-sequence objective, without an external retriever. During inference, a list of retrieved candidate answers explicitly guides language generation by removing invalid decoding paths. The proposed method achieves significant improvements across different dataset splits of the recently proposed Oven-Wiki benchmark, with accuracy on the Entity split rising from 32.7% to 61.5%. It demonstrates superior performance on the unseen and query splits by a substantial double-digit margin, while also preserving the ability to transfer effectively to other generic visual question answering benchmarks without further training.
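The constrained-generation step can be pictured with a short sketch. The following is a minimal illustration of the general technique (a prefix trie over tokenized candidate answers that masks out invalid decoding paths), not the authors' implementation; the helper names (`build_trie`, `allowed_next_tokens`, `greedy_constrained_decode`), the toy token ids, and the stand-in scorer are all illustrative assumptions.

```python
# Sketch of trie-constrained decoding (illustrative, not the paper's code):
# retrieved candidate answers are tokenized into a prefix trie, and each
# decoding step is restricted to tokens that keep the output a prefix of
# some candidate, so invalid decoding paths can never be generated.
from typing import Dict, List, Sequence

END = -1  # sentinel token id marking the end of a complete candidate


def build_trie(candidates: Sequence[Sequence[int]]) -> Dict:
    """Build a nested-dict prefix trie over tokenized candidate answers."""
    root: Dict = {}
    for tokens in candidates:
        node = root
        for tok in tokens:
            node = node.setdefault(tok, {})
        node[END] = {}  # mark a valid stopping point
    return root


def allowed_next_tokens(trie: Dict, prefix: Sequence[int]) -> List[int]:
    """Return the token ids that keep the decoded prefix on a valid path."""
    node = trie
    for tok in prefix:
        node = node[tok]
    return list(node.keys())


def greedy_constrained_decode(score_fn, trie: Dict) -> List[int]:
    """Greedy decoding restricted to the trie; score_fn(prefix, token)
    stands in for the language model's next-token log-probability."""
    prefix: List[int] = []
    while True:
        options = allowed_next_tokens(trie, prefix)
        best = max(options, key=lambda t: score_fn(prefix, t))
        if best == END:  # a complete candidate has been emitted
            return prefix
        prefix.append(best)


# Toy usage: two retrieved candidates sharing a prefix token.
candidates = [[7, 3, 9], [7, 5]]
trie = build_trie(candidates)
# Stand-in scorer that prefers higher token ids; a real system would use
# the multimodal LLM's logits at each step instead.
result = greedy_constrained_decode(lambda p, t: t, trie)
print(result)  # -> [7, 5], a complete candidate; off-trie tokens never appear
```

In a real system, `score_fn` would be replaced by the model's next-token distribution, and the candidate list would come from the retrieval component trained alongside the generator.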
Cite
Text
Xiao et al. "Grounding Language Models for Visual Entity Recognition." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73247-8_23Markdown
[Xiao et al. "Grounding Language Models for Visual Entity Recognition." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/xiao2024eccv-grounding/) doi:10.1007/978-3-031-73247-8_23BibTeX
@inproceedings{xiao2024eccv-grounding,
title = {{Grounding Language Models for Visual Entity Recognition}},
author = {Xiao, Zilin and Gong, Ming and Cascante-Bonilla, Paola and Zhang, Xingyao and Wu, Jie and Ordonez, Vicente},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-73247-8_23},
url = {https://mlanthology.org/eccv/2024/xiao2024eccv-grounding/}
}