SELF-IMAGINE: Effective Unimodal Reasoning with Multimodal Models Using Self-Imagination
Abstract
The potential of Vision-Language Models (VLMs) often remains underutilized in handling complex text-based problems, particularly when these problems could benefit from visual representation. Resonating with humans' ability to solve complex text-based problems by (1) creating a visual diagram of the problem and (2) deducing what steps are needed to solve it, we propose SELF-IMAGINE. We leverage a single Vision-Language Model (VLM) to generate a structured representation of the question using HTML, render the HTML as an image, and then use the same VLM to answer the question using both the question and the image. Our approach requires no additional training data or training. We evaluate it on three mathematics tasks and nine general-purpose reasoning tasks using state-of-the-art VLMs (LLaVA and Gemini). Our approach boosts VLM performance on all math tasks (on average GSM8K: +3.145%; ASDIV: +3.25%; SVAMP: +6.90%) and on the majority of the general-purpose reasoning tasks by 3.20% to 6.00% on average.
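To make the three-step pipeline in the abstract concrete, below is a minimal Python sketch. The query_vlm wrapper, the prompt wording, and the use of imgkit/wkhtmltoimage for HTML rendering are illustrative assumptions for this sketch, not the authors' actual implementation or prompts.

from typing import Optional
import imgkit  # HTML-to-image rendering via wkhtmltoimage (assumed backend, not from the paper)

def query_vlm(prompt: str, image_path: Optional[str] = None) -> str:
    """Hypothetical wrapper around a VLM (e.g., LLaVA or Gemini); plug in a real client here."""
    raise NotImplementedError

def self_imagine(question: str) -> str:
    # Step 1: the VLM generates a structured HTML representation of the question.
    html = query_vlm(
        "Convert the following problem into a structured HTML representation:\n" + question
    )
    # Step 2: render the generated HTML as an image (the "self-imagined" visual aid).
    imgkit.from_string(html, "question.png")
    # Step 3: the same VLM answers using both the original question and the rendered image.
    return query_vlm("Answer the question step by step:\n" + question,
                     image_path="question.png")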
Cite
Text
Akter et al. "SELF-IMAGINE: Effective Unimodal Reasoning with Multimodal Models Using Self-Imagination." ICLR 2024 Workshops: LLMAgents, 2024.
Markdown
[Akter et al. "SELF-IMAGINE: Effective Unimodal Reasoning with Multimodal Models Using Self-Imagination." ICLR 2024 Workshops: LLMAgents, 2024.](https://mlanthology.org/iclrw/2024/akter2024iclrw-selfimagine/)
BibTeX
@inproceedings{akter2024iclrw-selfimagine,
title = {{SELF-IMAGINE: Effective Unimodal Reasoning with Multimodal Models Using Self-Imagination}},
author = {Akter, Syeda Nahida and Madaan, Aman and Lee, Sangwu and Yang, Yiming and Nyberg, Eric},
booktitle = {ICLR 2024 Workshops: LLMAgents},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/akter2024iclrw-selfimagine/}
}