Material Anything: Generating Materials for Any 3D Object via Diffusion

Abstract

We present **Material Anything**, a fully automated, unified diffusion framework designed to generate physically based materials for 3D objects. Unlike existing methods that rely on complex pipelines or case-specific optimizations, Material Anything offers a robust, end-to-end solution adaptable to objects under diverse lighting conditions. Our approach leverages a pre-trained image diffusion model, enhanced with a triple-head architecture and a rendering loss to improve stability and material quality. Additionally, we introduce confidence masks as a dynamic switcher within the diffusion model, enabling it to effectively handle both textured and texture-less objects across varying lighting conditions. By employing a progressive material generation strategy guided by these confidence masks, along with a UV-space material refiner, our method ensures consistent, UV-ready material outputs. Extensive experiments demonstrate that our approach outperforms existing methods across a wide range of object categories and lighting conditions.
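To make the pipeline concrete, below is a minimal, self-contained sketch (NumPy only) of the confidence-masked, progressive per-view loop the abstract describes. All names here (`estimate_confidence`, `diffusion_material_step`, `generate_materials`) are hypothetical stand-ins for illustration, not the authors' API; the confidence heuristic and the channel-splitting "model" are placeholders for the paper's trained triple-head diffusion network.

```python
import numpy as np

def estimate_confidence(rendered_view, texture_strength=0.5):
    """Hypothetical confidence mask: high where the input view carries
    reliable appearance cues (e.g., textured, well-lit regions), low
    elsewhere. The real mask comes from the paper's pipeline."""
    luminance = rendered_view.mean(axis=-1, keepdims=True)
    local_var = np.abs(luminance - luminance.mean())
    return np.clip(local_var / (local_var.max() + 1e-8)
                   + texture_strength - 0.5, 0.0, 1.0)

def diffusion_material_step(view, confidence):
    """Stand-in for the triple-head model: returns albedo, roughness,
    and metallic maps for one view. Placeholder heuristics keep the
    sketch runnable; the real heads are learned."""
    albedo = view
    roughness = 1.0 - confidence
    metallic = np.zeros_like(confidence)
    return albedo, roughness, metallic

def generate_materials(views):
    """Progressive generation: each view is processed in turn, with the
    confidence mask switching the model between appearance-faithful and
    generative behavior; a UV-space refiner would harmonize the results."""
    outputs = []
    for view in views:
        conf = estimate_confidence(view)
        outputs.append(diffusion_material_step(view, conf))
    return outputs

views = [np.random.rand(64, 64, 3) for _ in range(4)]  # mock renders
materials = generate_materials(views)
print(len(materials), materials[0][0].shape)  # 4 (64, 64, 3)
```

The key design idea sketched here is the switching role of the confidence mask: where confidence is high, the model should stay faithful to the observed appearance; where it is low (texture-less or poorly lit regions), the generative prior takes over.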

Cite

Text

Huang et al. "Material Anything: Generating Materials for Any 3D Object via Diffusion." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02473

Markdown

[Huang et al. "Material Anything: Generating Materials for Any 3D Object via Diffusion." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/huang2025cvpr-material/) doi:10.1109/CVPR52734.2025.02473

BibTeX

@inproceedings{huang2025cvpr-material,
  title     = {{Material Anything: Generating Materials for Any 3D Object via Diffusion}},
  author    = {Huang, Xin and Wang, Tengfei and Liu, Ziwei and Wang, Qing},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {26556--26565},
  doi       = {10.1109/CVPR52734.2025.02473},
  url       = {https://mlanthology.org/cvpr/2025/huang2025cvpr-material/}
}