Multitwine: Multi-Object Compositing with Text and Layout Control
Abstract
We introduce the first generative model capable of simultaneous multi-object compositing, guided by both text and layout. Our model allows for the addition of multiple objects within a scene, capturing a range of interactions from simple positional relations (e.g., next to, in front of) to complex actions requiring reposing (e.g., hugging, playing guitar). When an interaction implies additional props, like 'taking a selfie', our model autonomously generates these supporting objects. By jointly training for compositing and subject-driven generation, also known as customization, we achieve a more balanced integration of textual and visual inputs for text-driven object compositing. As a result, we obtain a versatile model with state-of-the-art performance in both tasks. We further present a data generation pipeline leveraging visual and language models to effortlessly synthesize multimodal, aligned training data.
Cite
Text
Tarrés et al. "Multitwine: Multi-Object Compositing with Text and Layout Control." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.00758
Markdown
[Tarrés et al. "Multitwine: Multi-Object Compositing with Text and Layout Control." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/tarres2025cvpr-multitwine/) doi:10.1109/CVPR52734.2025.00758
BibTeX
@inproceedings{tarres2025cvpr-multitwine,
  title     = {{Multitwine: Multi-Object Compositing with Text and Layout Control}},
  author    = {Tarrés, Gemma Canet and Lin, Zhe and Zhang, Zhifei and Zhang, He and Gilbert, Andrew and Collomosse, John and Kim, Soo Ye},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {8094--8104},
  doi       = {10.1109/CVPR52734.2025.00758},
  url       = {https://mlanthology.org/cvpr/2025/tarres2025cvpr-multitwine/}
}