MeshLLM: Empowering Large Language Models to Progressively Understand and Generate 3D Mesh
Abstract
We present MeshLLM, a novel framework that leverages large language models (LLMs) to understand and generate text-serialized 3D meshes. Our approach addresses two key limitations of existing methods: the limited dataset scale imposed by LLMs' token-length budgets, and the loss of 3D structural information during mesh serialization. We introduce a Primitive-Mesh decomposition strategy that divides 3D meshes into structurally meaningful subunits. This enables the creation of a large-scale dataset with 1500k+ samples, almost 50x larger than previous methods, which aligns better with LLM scaling-law principles. Furthermore, we propose two training strategies, inferring face connectivity from vertices and local mesh assembly, which significantly enhance the LLMs' ability to capture mesh topology and spatial structure. Experiments show that MeshLLM outperforms the state-of-the-art LLaMA-Mesh in both mesh generation quality and shape understanding, highlighting its strong potential for processing text-serialized 3D meshes.
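To make "text-serialized 3D mesh" concrete: the mesh is rendered as plain text that an LLM can read and emit, typically OBJ-style vertex and face lines with quantized coordinates. Below is a minimal sketch of such a serializer; the grid size, function name, and quantization details are illustrative assumptions, not the paper's exact scheme.

```python
def serialize_mesh(vertices, faces, grid=128):
    """Emit OBJ-like text ('v x y z' / 'f a b c') with quantized coordinates."""
    # Normalize all coordinates to [0, 1], then snap them to an integer grid
    # so each number costs only a few tokens. Grid size 128 is an assumption.
    coords = [c for v in vertices for c in v]
    lo, hi = min(coords), max(coords)
    scale = (grid - 1) / (hi - lo) if hi > lo else 1.0
    lines = []
    for x, y, z in vertices:
        qx, qy, qz = (round((c - lo) * scale) for c in (x, y, z))
        lines.append(f"v {qx} {qy} {qz}")
    for a, b, c in faces:
        lines.append(f"f {a + 1} {b + 1} {c + 1}")  # OBJ indices are 1-based
    return "\n".join(lines)

# Example: a single triangle becomes four short text lines.
print(serialize_mesh([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
                     [(0, 1, 2)]))
```

Serializing whole meshes this way quickly exhausts an LLM's context window, which motivates the Primitive-Mesh decomposition: training on structurally meaningful subunits keeps each sample within the token budget.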
Cite
Text
Fang et al. "MeshLLM: Empowering Large Language Models to Progressively Understand and Generate 3D Mesh." International Conference on Computer Vision, 2025.
Markdown
[Fang et al. "MeshLLM: Empowering Large Language Models to Progressively Understand and Generate 3D Mesh." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/fang2025iccv-meshllm/)
BibTeX
@inproceedings{fang2025iccv-meshllm,
title = {{MeshLLM: Empowering Large Language Models to Progressively Understand and Generate 3D Mesh}},
author = {Fang, Shuangkang and Shen, I-Chao and Wang, Yufeng and Tsai, Yi-Hsuan and Yang, Yi and Zhou, Shuchang and Ding, Wenrui and Igarashi, Takeo and Yang, Ming-Hsuan},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {14061--14072},
url = {https://mlanthology.org/iccv/2025/fang2025iccv-meshllm/}
}