X-Fusion: Introducing New Modality to Frozen Large Language Models
Abstract
We propose X-Fusion, a framework that extends pretrained Large Language Models (LLMs) for multimodal tasks while preserving their language capabilities. X-Fusion employs a dual-tower design with modality-specific weights, keeping the LLM's parameters frozen while integrating vision-specific information for both understanding and generation. We find that incorporating understanding-focused data improves generation quality, reducing image data noise enhances overall performance, and feature alignment accelerates convergence for smaller models but has minimal impact on larger ones. Our findings provide valuable insights into building efficient unified multimodal models.
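The dual-tower routing described in the abstract can be sketched minimally: each token passes through modality-specific weights, with the text tower frozen and the vision tower trainable. This is a toy illustration under assumed simplifications (a single linear map stands in for a full transformer layer, and the joint attention that mixes modalities is omitted); the weight names and routing function are illustrative, not from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size

# Frozen text-tower weights (stand-in for a pretrained LLM layer).
W_text = rng.standard_normal((d, d)) / np.sqrt(d)
# Trainable vision-tower weights, here initialized from the text tower
# (an assumption for this sketch, not necessarily the paper's scheme).
W_vision = W_text.copy()

def dual_tower_layer(x, is_image):
    """Route each token through its modality-specific weights.

    x: (seq, d) hidden states; is_image: (seq,) boolean mask.
    Text tokens use the frozen W_text, image tokens the trainable
    W_vision; outputs share one sequence so a joint attention step
    (omitted here) could mix the two modalities.
    """
    out = np.empty_like(x)
    out[~is_image] = x[~is_image] @ W_text
    out[is_image] = x[is_image] @ W_vision
    return out

# A mixed sequence: text, text, image, image, image, text.
x = rng.standard_normal((6, d))
mask = np.array([False, False, True, True, True, False])
h = dual_tower_layer(x, mask)
```

Because only `W_vision` would receive gradients during training, the frozen text tower leaves the LLM's language behavior untouched, which is the point of the design.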
Cite
Text
Mo et al. "X-Fusion: Introducing New Modality to Frozen Large Language Models." International Conference on Computer Vision, 2025.
Markdown
[Mo et al. "X-Fusion: Introducing New Modality to Frozen Large Language Models." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/mo2025iccv-xfusion/)
BibTeX
@inproceedings{mo2025iccv-xfusion,
  title = {{X-Fusion: Introducing New Modality to Frozen Large Language Models}},
  author = {Mo, Sicheng and Nguyen, Thao and Huang, Xun and Iyer, Siddharth Srinivasan and Li, Yijun and Liu, Yuchen and Tandon, Abhishek and Shechtman, Eli and Singh, Krishna Kumar and Lee, Yong Jae and Zhou, Bolei and Li, Yuheng},
  booktitle = {International Conference on Computer Vision},
  year = {2025},
  pages = {228-238},
  url = {https://mlanthology.org/iccv/2025/mo2025iccv-xfusion/}
}