VLM Q-Learning: Aligning Vision-Language Models for Interactive Decision-Making
Abstract
Recent research looks to harness the general knowledge and reasoning of large language models (LLMs) to build agents that accomplish user-specified goals in interactive environments. Vision-language models (VLMs) extend LLMs to multi-modal data and provide agents with the visual reasoning necessary for new applications in areas such as computer automation. However, agent tasks emphasize skills where accessible open-weight VLMs lag behind their LLM equivalents. For example, VLMs are less capable of following an environment's strict output syntax requirements and are more focused on open-ended question answering. Overcoming these limitations typically requires supervised fine-tuning (SFT) on task-specific expert demonstrations. Our work approaches these challenges from an offline-to-online reinforcement learning (RL) perspective. RL lets us fine-tune VLMs for agent tasks while learning from the unsuccessful decisions of our own model or of more capable (larger) models. We explore an off-policy RL solution that retains the stability and simplicity of the widely used SFT workflow while allowing our agent to self-improve and learn from low-quality datasets. We demonstrate this technique with two open-weight VLMs across three multi-modal agent domains.
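For intuition, the sketch below shows what an off-policy, token-level Q-learning objective for a VLM agent could look like. It is a minimal illustration only: the names (q_head, target_q_head, gamma) and the per-token credit-assignment scheme are assumptions made for this example, not the paper's actual implementation.

# Hypothetical sketch of a token-level TD(0) loss for fine-tuning a VLM agent.
# All module names and the credit-assignment scheme are illustrative assumptions.
import torch
import torch.nn.functional as F


def td_loss(q_head, target_q_head, hidden, next_hidden,
            action_tokens, reward, done, gamma=0.99):
    """One TD(0) update treating each generated action token as a decision step.

    hidden / next_hidden: VLM hidden states for the current and next
        observation-plus-action contexts, shape (batch, seq, dim).
    action_tokens: token ids the agent emitted, shape (batch, seq).
    reward, done: environment feedback for the turn, shape (batch,).
    """
    # Q-values of the action tokens the agent actually produced.
    q_all = q_head(hidden)                                        # (B, T, vocab)
    q_taken = q_all.gather(-1, action_tokens.unsqueeze(-1)).squeeze(-1)

    with torch.no_grad():
        # Bootstrapped target from a slow-moving copy of the Q-head.
        next_q = target_q_head(next_hidden).max(dim=-1).values    # (B, T)
        target = reward.unsqueeze(-1) + gamma * (1.0 - done.unsqueeze(-1)) * next_q

    return F.mse_loss(q_taken, target)

In an offline-to-online setup, a loss of this kind could first be minimized on a fixed dataset of (possibly suboptimal) trajectories and then on the agent's own rollouts, which is what lets the method learn from low-quality data without expert demonstrations.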
Cite
Text
Grigsby et al. "VLM Q-Learning: Aligning Vision-Language Models for Interactive Decision-Making." ICLR 2025 Workshops: SSI-FM, 2025.

BibTeX
@inproceedings{grigsby2025iclrw-vlm,
title = {{VLM Q-Learning: Aligning Vision-Language Models for Interactive Decision-Making}},
author = {Grigsby, Jake and Zhu, Yuke and Ryoo, Michael S and Niebles, Juan Carlos},
booktitle = {ICLR 2025 Workshops: SSI-FM},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/grigsby2025iclrw-vlm/}
}