PARASOL: Parametric Style Control for Diffusion Image Synthesis

Abstract

We propose PARASOL, a multi-modal synthesis model that enables disentangled, parametric control of an image's visual style by jointly conditioning synthesis on both content and a fine-grained visual style embedding. We train a latent diffusion model (LDM) with modality-specific losses and adapt classifier-free guidance to encourage disentangled control over the independent content and style modalities at inference time. We leverage auxiliary semantic and style-based search to create training triplets for supervision of the LDM, ensuring complementarity of content and style cues. PARASOL shows promise for nuanced control over visual style in diffusion-based image creation and stylization, as well as generative search, where text-based search results can be adapted to more closely match user intent by interpolating both content and style descriptors.
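The guidance adaptation described above can be illustrated with a minimal sketch of classifier-free guidance extended to two independently weighted conditioning modalities. Note this is an assumed, simplified formulation for illustration only; the function name, signature, and default weights are hypothetical, and PARASOL's actual weighting scheme may differ.

```python
import numpy as np

def multimodal_cfg(eps_uncond, eps_content, eps_style, w_content=3.0, w_style=1.5):
    """Hypothetical two-modality classifier-free guidance step.

    Combines the unconditional noise estimate with estimates conditioned
    separately on content and on style, each with its own guidance scale,
    so the two modalities can be balanced independently at inference time.
    (Sketch only; not the paper's exact formulation.)
    """
    return (eps_uncond
            + w_content * (eps_content - eps_uncond)
            + w_style * (eps_style - eps_uncond))

# Toy example with scalar-filled "noise estimates":
guided = multimodal_cfg(np.zeros(4), np.ones(4), 2 * np.ones(4))
```

Giving each modality its own scale is what allows a user to dial style influence up or down without disturbing content fidelity, which is the parametric control the abstract refers to.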

Cite

Text

Tarres et al. "PARASOL: Parametric Style Control for Diffusion Image Synthesis." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00250

Markdown

[Tarres et al. "PARASOL: Parametric Style Control for Diffusion Image Synthesis." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/tarres2024cvprw-parasol/) doi:10.1109/CVPRW63382.2024.00250

BibTeX

@inproceedings{tarres2024cvprw-parasol,
  title     = {{PARASOL: Parametric Style Control for Diffusion Image Synthesis}},
  author    = {Tarres, Gemma Canet and Ruta, Dan and Bui, Tu and Collomosse, John P.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2024},
  pages     = {2432--2442},
  doi       = {10.1109/CVPRW63382.2024.00250},
  url       = {https://mlanthology.org/cvprw/2024/tarres2024cvprw-parasol/}
}