HybridBooth: Hybrid Prompt Inversion for Efficient Subject-Driven Generation
Abstract
Recent advancements in text-to-image diffusion models have shown remarkable creative capabilities with textual prompts, but generating personalized instances based on specific subjects, known as subject-driven generation, remains challenging. To tackle this issue, we present a new hybrid framework called HybridBooth, which merges the benefits of optimization-based and direct-regression methods. HybridBooth operates in two stages: the Word Embedding Probe, which generates a robust initial word embedding using a fine-tuned encoder, and the Word Embedding Refinement, which further adapts the encoder to specific subject images by optimizing key parameters. This approach allows for effective and fast inversion of visual concepts into a textual embedding, even from a single image, while maintaining the model's generalization capabilities.
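The two-stage procedure described above can be illustrated with a minimal, hypothetical sketch in PyTorch-style Python. Here `image_encoder`, `diffusion_loss`, and the rule for picking the "key parameters" are illustrative placeholders under assumed interfaces, not the authors' implementation.

```python
import torch

def hybrid_prompt_inversion(image, image_encoder, diffusion_loss,
                            refine_steps=50, lr=1e-4):
    """Sketch of a two-stage prompt inversion (assumed interfaces).

    image_encoder: maps a subject image to a word embedding (Stage 1 regressor).
    diffusion_loss: denoising reconstruction loss of a frozen text-to-image
        model, conditioned on a prompt containing the inverted embedding.
    """
    # Stage 1: Word Embedding Probe -- direct regression of a robust
    # initial word embedding from the subject image.
    with torch.no_grad():
        initial_embedding = image_encoder(image)

    # Stage 2: Word Embedding Refinement -- adapt only a small subset of
    # encoder parameters to the specific subject image.
    key_params = [p for name, p in image_encoder.named_parameters()
                  if "proj" in name]  # placeholder selection criterion
    optimizer = torch.optim.AdamW(key_params, lr=lr)

    for _ in range(refine_steps):
        optimizer.zero_grad()
        embedding = image_encoder(image)
        loss = diffusion_loss(image, embedding)  # diffusion model stays frozen
        loss.backward()
        optimizer.step()

    return image_encoder(image).detach()
```

Because only a few encoder parameters are updated in Stage 2, refinement from a single image can stay fast while the frozen diffusion model retains its general generation ability.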
Cite
Text
Guan et al. "HybridBooth: Hybrid Prompt Inversion for Efficient Subject-Driven Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72673-6_22
Markdown
[Guan et al. "HybridBooth: Hybrid Prompt Inversion for Efficient Subject-Driven Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/guan2024eccv-hybridbooth/) doi:10.1007/978-3-031-72673-6_22
BibTeX
@inproceedings{guan2024eccv-hybridbooth,
title = {{HybridBooth: Hybrid Prompt Inversion for Efficient Subject-Driven Generation}},
author = {Guan, Shanyan and Ge, Yanhao and Tai, Ying and Yang, Jian and Li, Wei and You, Mingyu},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72673-6_22},
url = {https://mlanthology.org/eccv/2024/guan2024eccv-hybridbooth/}
}