MaTe: Images Are All You Need for Material Transfer via Diffusion Transformer

Abstract

Recent diffusion-based methods for material transfer rely on image fine-tuning or complex architectures with assistive networks, but they face challenges including text dependency, extra computational cost, and feature misalignment. To address these limitations, we propose MaTe, a streamlined diffusion framework that eliminates textual guidance and reference networks. MaTe integrates input images at the token level, enabling unified processing via multi-modal attention in a shared latent space. This design removes the need for additional adapters, ControlNet, inversion sampling, or model fine-tuning. Extensive experiments demonstrate that MaTe achieves high-quality material generation under a zero-shot, training-free paradigm: it outperforms state-of-the-art methods in both visual quality and efficiency while preserving precise detail alignment, and it significantly simplifies inference prerequisites.
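
To make the token-level design concrete, the sketch below illustrates the general idea of multi-modal attention over concatenated image tokens as described in the abstract. It is a minimal PyTorch illustration, not MaTe's actual implementation: the class, parameter, and variable names are hypothetical, and the real model is a full diffusion transformer built from many such blocks.

import torch
import torch.nn as nn

class JointAttentionBlock(nn.Module):
    """Hypothetical sketch of token-level multi-modal attention.

    Tokens from the target image and the material reference are
    concatenated and attended to jointly in one shared latent space,
    so no reference network or adapter is required. Names here are
    illustrative only, not MaTe's actual API.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, target_tokens: torch.Tensor, material_tokens: torch.Tensor):
        # Concatenate along the sequence axis: (B, N_t + N_m, D).
        tokens = torch.cat([target_tokens, material_tokens], dim=1)
        x = self.norm(tokens)
        # Joint self-attention: every token attends to both modalities.
        out, _ = self.attn(x, x, x)
        tokens = tokens + out
        # Split back; only the target-image tokens would be decoded.
        n_t = target_tokens.shape[1]
        return tokens[:, :n_t], tokens[:, n_t:]

# Example: 64 target tokens and 64 material tokens, 512-dim latents.
block = JointAttentionBlock(dim=512)
tgt = torch.randn(1, 64, 512)
mat = torch.randn(1, 64, 512)
tgt_out, mat_out = block(tgt, mat)
print(tgt_out.shape)  # torch.Size([1, 64, 512])

Because both images live in the same token sequence, material cues flow into the target representation through ordinary attention, which is what lets the approach dispense with ControlNet-style assistive branches.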

Cite

Text

Huang et al. "MaTe: Images Are All You Need for Material Transfer via Diffusion Transformer." International Conference on Computer Vision, 2025.

Markdown

[Huang et al. "MaTe: Images Are All You Need for Material Transfer via Diffusion Transformer." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/huang2025iccv-mate/)

BibTeX

@inproceedings{huang2025iccv-mate,
  title     = {{MaTe: Images Are All You Need for Material Transfer via Diffusion Transformer}},
  author    = {Huang, Nisha and Liu, Henglin and Lin, Yizhou and Huang, Kaer and Chen, Chubin and Guo, Jie and Lee, Tong-Yee and Li, Xiu},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {15117--15126},
  url       = {https://mlanthology.org/iccv/2025/huang2025iccv-mate/}
}