Controlling Multimodal LLMs via Reward-Guided Decoding

Abstract

As Multimodal Large Language Models (MLLMs) gain widespread applicability, it is becoming increasingly desirable to adapt them for diverse user needs. In this paper, we study the adaptation of MLLMs through controlled decoding. To achieve this, we introduce the first method for reward-guided decoding of MLLMs and demonstrate its application in improving their visual grounding. Our method involves learning a reward model for visual grounding and using it to guide the MLLM's decoding process. Our approach enables on-the-fly controllability of an MLLM's inference process in two ways: first, by giving control over the relative importance of reward and output likelihood during decoding, allowing a user to dynamically trade off object precision and recall in image captioning tasks; second, by giving control over the breadth of the search during decoding, allowing a user to trade off compute for output quality. We evaluate our method on standard object hallucination benchmarks, showing that it provides significant controllability over MLLM inference, while matching or outperforming existing visual grounding methods.
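
To make the decoding procedure concrete, below is a minimal sketch of reward-guided token selection along the lines the abstract describes: at each step, candidate continuations proposed by the MLLM are re-scored by adding a reward-model score for visual grounding, weighted by a coefficient that sets the relative importance of reward versus output likelihood, while the number of candidates considered sets the breadth of the search (compute vs. quality). The interfaces (lm_topk, reward_fn), the default values, and the exact combination rule are illustrative assumptions, not the paper's implementation.

    import math
    from typing import Callable, List, Tuple

    # Hypothetical interfaces (illustrative, not from the paper):
    #   lm_topk(prefix, k)  -> top-k (token, log-prob) candidates from the MLLM
    #   reward_fn(tokens)   -> scalar visual-grounding score for a partial caption
    def reward_guided_step(
        prefix: List[str],
        lm_topk: Callable[[List[str], int], List[Tuple[str, float]]],
        reward_fn: Callable[[List[str]], float],
        k: int = 5,        # breadth of the search: more candidates = more compute, better quality
        lam: float = 1.0,  # weight of the reward relative to the LM log-likelihood
    ) -> str:
        """Pick the next token by combining LM log-likelihood with a reward score."""
        candidates = lm_topk(prefix, k)
        best_token, best_score = None, -math.inf
        for token, logp in candidates:
            # Guided score: LM log-probability plus weighted reward of the extended prefix.
            score = logp + lam * reward_fn(prefix + [token])
            if score > best_score:
                best_token, best_score = token, score
        return best_token

In this sketch, raising lam pushes generations toward higher-reward (better-grounded) captions at the expense of likelihood, while raising k widens the candidate pool per step, mirroring the two on-the-fly controls described above.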

Cite

Text

Mañas et al. "Controlling Multimodal LLMs via Reward-Guided Decoding." NeurIPS 2024 Workshops: AFM, 2024.

Markdown

[Mañas et al. "Controlling Multimodal LLMs via Reward-Guided Decoding." NeurIPS 2024 Workshops: AFM, 2024.](https://mlanthology.org/neuripsw/2024/manas2024neuripsw-controlling/)

BibTeX

@inproceedings{manas2024neuripsw-controlling,
  title     = {{Controlling Multimodal LLMs via Reward-Guided Decoding}},
  author    = {Mañas, Oscar and D'Oro, Pierluca and Sinha, Koustuv and Romero-Soriano, Adriana and Drozdzal, Michal and Agrawal, Aishwarya},
  booktitle = {NeurIPS 2024 Workshops: AFM},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/manas2024neuripsw-controlling/}
}