LoRAShop: Training-Free Multi-Concept Image Generation and Editing with Rectified Flow Transformers

Abstract

We introduce LoRAShop, the first framework for multi-concept image generation and editing with LoRA models. LoRAShop builds on a key observation about the feature interaction patterns inside Flux-style diffusion transformers: concept-specific transformer features activate spatially coherent regions early in the denoising process. We harness this observation to derive a disentangled latent mask for each concept in a prior forward pass, and blend the corresponding LoRA weights only within the regions bounding the concepts to be personalized. The resulting edits seamlessly integrate multiple subjects or styles into the original scene while preserving global context, lighting, and fine details. Our experiments demonstrate that LoRAShop delivers better identity preservation than baseline methods. By eliminating retraining and external constraints, LoRAShop turns personalized diffusion models into a practical "photoshop-with-LoRAs" tool and opens new avenues for compositional visual storytelling and rapid creative iteration.
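The core idea of restricting each LoRA's influence to its concept's spatial region can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's actual implementation: it assumes per-concept feature deltas and binary masks are already available (in LoRAShop these masks are derived from transformer activations in a prior forward pass), and all names are hypothetical.

```python
import numpy as np

def blend_lora_features(base_feat, lora_deltas, masks):
    """Illustrative sketch: apply each concept's LoRA feature delta only
    inside that concept's spatial mask, leaving the rest of the scene
    governed by the base model.

    base_feat:   (H, W, C) features from the base model
    lora_deltas: list of (H, W, C) per-concept LoRA feature deltas
    masks:       list of (H, W) binary masks bounding each concept
    """
    out = base_feat.copy()
    for delta, mask in zip(lora_deltas, masks):
        # Broadcast the (H, W) mask over channels; pixels outside the
        # mask keep the base features unchanged.
        out += mask[..., None] * delta
    return out

# Toy example: one concept occupying the top-left pixel of a 2x2 grid.
base = np.zeros((2, 2, 3))
delta = np.ones((2, 2, 3))
mask = np.array([[1, 0], [0, 0]])
blended = blend_lora_features(base, [delta], [mask])
```

With multiple concepts, each delta is confined to its own mask, which is what keeps the personalized subjects disentangled from each other and from the background.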

Cite

Text

Dalva et al. "LoRAShop: Training-Free Multi-Concept Image Generation and Editing with Rectified Flow Transformers." Advances in Neural Information Processing Systems, 2025.

Markdown

[Dalva et al. "LoRAShop: Training-Free Multi-Concept Image Generation and Editing with Rectified Flow Transformers." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/dalva2025neurips-lorashop/)

BibTeX

@inproceedings{dalva2025neurips-lorashop,
  title     = {{LoRAShop: Training-Free Multi-Concept Image Generation and Editing with Rectified Flow Transformers}},
  author    = {Dalva, Yusuf and Yesiltepe, Hidir and Yanardag, Pinar},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/dalva2025neurips-lorashop/}
}