Graph4MM: Weaving Multimodal Learning with Structural Information

Abstract

Real-world multimodal data usually exhibit complex structural relationships beyond traditional one-to-one mappings like image-caption pairs. Entities across modalities interact in intricate ways, with images and text forming diverse interconnections through contextual dependencies and co-references. Graphs provide powerful structural information for modeling intra-modal and inter-modal relationships. However, previous works fail to distinguish multi-hop neighbors and treat the graph as a standalone modality, fragmenting the overall understanding. This limitation presents two key challenges in multimodal learning: (1) integrating structural information from multi-hop neighbors into foundation models, and (2) fusing modality-specific information in a principled manner. To address these challenges, we revisit the role of graphs in multimodal learning in the era of foundation models and propose Graph4MM, a graph-based multimodal learning framework. Specifically, we introduce Hop-Diffused Attention, which integrates multi-hop structural information into self-attention through causal masking and hop diffusion. Furthermore, we design MM-QFormer, a multi-mapping querying transformer for cross-modal fusion. Through theoretical and empirical analysis, we show that leveraging structures to integrate both intra- and inter-modal interactions improves multimodal understanding beyond treating graphs as a standalone modality. Experiments on both generative and discriminative tasks show that Graph4MM outperforms larger VLMs, LLMs, and multimodal graph baselines, achieving a 6.93% average improvement.
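The abstract describes injecting multi-hop graph structure into self-attention via masking. The sketch below illustrates one plausible way to realize that idea: build an attention bias where k-hop neighbors receive a decayed weight and unreachable pairs are masked out. The function names and the geometric decay scheme are illustrative assumptions, not the paper's exact Hop-Diffused Attention formulation.

```python
import numpy as np

def hop_diffused_mask(adj, num_hops=3, decay=0.5):
    """Illustrative multi-hop attention bias (assumed scheme, not the
    paper's exact formulation).

    Nodes first reachable at hop k get a bias of log(decay**(k-1));
    pairs unreachable within num_hops get -inf, which zeroes their
    attention weight after softmax.
    """
    n = adj.shape[0]
    bias = np.full((n, n), -np.inf)
    np.fill_diagonal(bias, 0.0)            # hop 0: always attend to self
    reached = np.eye(n, dtype=bool)
    frontier = np.eye(n, dtype=bool)
    for k in range(1, num_hops + 1):
        frontier = (frontier.astype(float) @ adj) > 0  # reachable at hop k
        new = frontier & ~reached
        bias[new] = np.log(decay ** (k - 1) + 1e-9)
        reached |= new
    return bias

def attention_with_bias(q, k_mat, v, bias):
    """Standard scaled dot-product attention with an additive bias."""
    scores = q @ k_mat.T / np.sqrt(q.shape[-1]) + bias
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v
```

For a 4-node path graph 0-1-2-3, `hop_diffused_mask` gives node 0 a bias of roughly 0 toward node 1, log(0.5) toward node 2, and log(0.25) toward node 3, so attention mass falls off with hop distance.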

Cite

Text

Ning et al. "Graph4MM: Weaving Multimodal Learning with Structural Information." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Ning et al. "Graph4MM: Weaving Multimodal Learning with Structural Information." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/ning2025icml-graph4mm/)

BibTeX

@inproceedings{ning2025icml-graph4mm,
  title     = {{Graph4MM: Weaving Multimodal Learning with Structural Information}},
  author    = {Ning, Xuying and Fu, Dongqi and Wei, Tianxin and Xu, Wujiang and He, Jingrui},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {46448--46472},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/ning2025icml-graph4mm/}
}