MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation

Abstract

In this paper, we present MoMA: an open-vocabulary, training-free personalized image model that boasts flexible zero-shot capabilities. As foundational text-to-image models rapidly evolve, the demand for robust image-to-image translation grows. Addressing this need, MoMA specializes in subject-driven personalized image generation. Utilizing an open-source Multimodal Large Language Model (MLLM), we train MoMA to serve a dual role as both a feature extractor and a generator. This approach effectively synergizes reference-image and text-prompt information to produce valuable image features that condition an image diffusion model. To better leverage the generated features, we further introduce a novel self-attention shortcut method that efficiently transfers image features to the image diffusion model, improving the resemblance of the target object in generated images. Remarkably, as a tuning-free plug-and-play module, our model requires only a single reference image and outperforms existing methods in generating images with high detail fidelity, enhanced identity preservation, and prompt faithfulness. We commit to making our work open-source, thereby providing universal access to these advancements.
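
To make the self-attention shortcut described in the abstract more concrete, below is a minimal PyTorch-style sketch of the general idea: features extracted from a reference image are concatenated into the keys and values of a diffusion UNet's self-attention layer, so subject detail from the reference can flow into the generated image. The function name, tensor shapes, and the blending scale are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def self_attention_with_reference(x, ref_feats, to_q, to_k, to_v, num_heads, scale=1.0):
    # x:         (B, N, C) latent tokens of the image being generated
    # ref_feats: (B, M, C) features from a forward pass on the reference image
    # to_q/k/v:  the attention layer's existing linear projections (nn.Linear)
    # scale:     hypothetical knob controlling how strongly the reference is injected
    B, N, C = x.shape
    head_dim = C // num_heads

    def split_heads(t):
        return t.reshape(B, -1, num_heads, head_dim).transpose(1, 2)

    q = split_heads(to_q(x))
    # Keys and values attend over both the current latents and the reference features.
    kv_input = torch.cat([x, scale * ref_feats], dim=1)
    k = split_heads(to_k(kv_input))
    v = split_heads(to_v(kv_input))

    out = F.scaled_dot_product_attention(q, k, v)  # (B, num_heads, N, head_dim)
    return out.transpose(1, 2).reshape(B, N, C)

In this sketch the layer's own projections are reused for the reference tokens; a real implementation could instead learn separate projections or mask the reference contribution to the subject region, which is where design choices like MoMA's would come in.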

Cite

Text

Song et al. "MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73661-2_7

Markdown

[Song et al. "MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/song2024eccv-moma/) doi:10.1007/978-3-031-73661-2_7

BibTeX

@inproceedings{song2024eccv-moma,
  title     = {{MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation}},
  author    = {Song, Kunpeng and Zhu, Yizhe and Liu, Bingchen and Yan, Qing and Elgammal, Ahmed and Yang, Xiao},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73661-2_7},
  url       = {https://mlanthology.org/eccv/2024/song2024eccv-moma/}
}