LoRAverse: A Submodular Framework to Retrieve Diverse Adapters for Diffusion Models

Abstract

Low-rank Adaptation (LoRA) models have revolutionized the personalization of pre-trained diffusion models by enabling fine-tuning through low-rank, factorized weight matrices specifically optimized for attention layers. These models facilitate the generation of highly customized content across a variety of objects, individuals, and artistic styles without the need for extensive retraining. Despite the availability of over 100K LoRA adapters on platforms like Civit.ai, users often face challenges in navigating, selecting, and effectively utilizing the most suitable adapters due to their sheer volume, diversity, and lack of structured organization. This paper addresses the problem of selecting the most relevant and diverse LoRA models from this vast database by framing the task as a combinatorial optimization problem and proposing a novel submodular framework. Our quantitative and qualitative experiments demonstrate that our method generates diverse outputs across a wide range of domains.
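The abstract does not spell out the paper's exact objective, but the "relevant and diverse subset selection via a submodular framework" idea can be illustrated with a generic facility-location objective maximized greedily (greedy selection enjoys the standard (1 − 1/e) approximation guarantee for monotone submodular functions). The similarity matrix and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def greedy_facility_location(sim: np.ndarray, k: int) -> list[int]:
    """Greedily pick k adapters maximizing a facility-location objective.

    sim: (n, n) nonnegative adapter-adapter similarity matrix
         (e.g. cosine similarity of adapter embeddings -- an assumption here).
    Each selected adapter "covers" the rest of the pool; maximizing total
    coverage rewards relevance while penalizing redundant, similar picks.
    """
    n = sim.shape[0]
    selected: list[int] = []
    covered = np.zeros(n)  # best similarity of each item to any selected adapter
    for _ in range(k):
        # Marginal gain of adding each candidate i: how much total coverage
        # improves if item i joins the selected set.
        gains = np.maximum(sim, covered).sum(axis=1) - covered.sum()
        gains[selected] = -np.inf  # never re-pick an adapter
        best = int(np.argmax(gains))
        selected.append(best)
        covered = np.maximum(covered, sim[best])
    return selected
```

For example, with two near-duplicate adapters and one distinct one, the greedy step picks one of the duplicates first and then the distinct adapter, skipping the redundant copy.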

Cite

Text

Sonmezer et al. "LoRAverse: A Submodular Framework to Retrieve Diverse Adapters for Diffusion Models." International Conference on Computer Vision, 2025.

Markdown

[Sonmezer et al. "LoRAverse: A Submodular Framework to Retrieve Diverse Adapters for Diffusion Models." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/sonmezer2025iccv-loraverse/)

BibTeX

@inproceedings{sonmezer2025iccv-loraverse,
  title     = {{LoRAverse: A Submodular Framework to Retrieve Diverse Adapters for Diffusion Models}},
  author    = {Sonmezer, Mert and Zheng, Matthew and Yanardag, Pinar},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {17879--17888},
  url       = {https://mlanthology.org/iccv/2025/sonmezer2025iccv-loraverse/}
}