Hints of Prompt: Enhancing Visual Representation for Multimodal LLMs in Autonomous Driving
Abstract
In light of the dynamic nature of autonomous driving environments and stringent safety requirements, general MLLMs combined with CLIP alone often struggle to accurately represent driving-specific scenarios, particularly in complex interactions and long-tail cases. To address this, we propose the Hints of Prompt (HoP) framework, which introduces three key enhancements: an Affinity hint that emphasizes instance-level structure by strengthening token-wise connections, a Semantic hint that incorporates high-level information relevant to driving-specific cases such as complex interactions among vehicles and traffic signs, and a Question hint that aligns visual features with the query context, focusing on question-relevant regions. These hints are fused through a Hint Fusion module, enriching visual representations with driving-related cues even under limited domain data and enabling faster adaptation to driving scenarios. Extensive experiments confirm the effectiveness of the HoP framework, showing that it significantly outperforms previous state-of-the-art methods across all key metrics.
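The abstract names a Hint Fusion module that combines the three hint features, but does not specify its form. A minimal concatenate-and-project sketch, assuming per-token feature matrices of equal width; all names, shapes, and the fusion rule itself are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_hints(affinity, semantic, question, w, b):
    """Hypothetical fusion: concatenate the three hint features
    token-wise, then linearly project back to the original width d."""
    fused = np.concatenate([affinity, semantic, question], axis=-1)  # [T, 3d]
    return fused @ w + b  # [T, d]

T, d = 16, 32  # assumed token count and feature width
affinity = rng.normal(size=(T, d))   # instance-level structure cues
semantic = rng.normal(size=(T, d))   # high-level driving semantics
question = rng.normal(size=(T, d))   # query-conditioned features
w = rng.normal(size=(3 * d, d)) / np.sqrt(3 * d)  # projection weights
b = np.zeros(d)

enriched = fuse_hints(affinity, semantic, question, w, b)
print(enriched.shape)  # (16, 32)
```

The projection keeps the fused tokens at the visual encoder's original width, so the enriched features can drop into the MLLM's existing vision-language interface unchanged.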
Cite
Text
Zhou et al. "Hints of Prompt: Enhancing Visual Representation for Multimodal LLMs in Autonomous Driving." International Conference on Computer Vision, 2025.
Markdown
[Zhou et al. "Hints of Prompt: Enhancing Visual Representation for Multimodal LLMs in Autonomous Driving." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/zhou2025iccv-hints/)
BibTeX
@inproceedings{zhou2025iccv-hints,
title = {{Hints of Prompt: Enhancing Visual Representation for Multimodal LLMs in Autonomous Driving}},
author = {Zhou, Hao and Gao, Zhanning and Chen, Zhili and Ye, Maosheng and Chen, Qifeng and Cao, Tongyi and Qi, Honggang},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {6165--6175},
url = {https://mlanthology.org/iccv/2025/zhou2025iccv-hints/}
}