Diffusion Models Already Have a Semantic Latent Space
Abstract
Diffusion models achieve outstanding generative performance in various domains. Despite their great success, they lack a semantic latent space, which is essential for controlling the generative process. To address this problem, we propose the asymmetric reverse process (Asyrp), which discovers a semantic latent space in frozen pretrained diffusion models. Our semantic latent space, named h-space, has nice properties for accommodating semantic image manipulation: homogeneity, linearity, robustness, and consistency across timesteps. In addition, we measure the editing strength and quality deficiency of the generative process at each timestep to provide a principled design of the process, improving versatility and quality. Our method is applicable to various architectures (DDPM++, iDDPM, and ADM) and datasets (CelebA-HQ, AFHQ-dog, LSUN-church, LSUN-bedroom, and METFACES).
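The asymmetry of the reverse process can be sketched with a DDIM-style deterministic step, which splits each update into a predicted-clean-image term and a direction term. The sketch below is an illustration of that idea, not the paper's implementation: in the actual method, the edit is a shift Δh applied to the U-Net's bottleneck feature map (h-space), which is abstracted here as a second, "shifted" noise prediction. The function names and the toy alpha-bar values are assumptions for the example.

```python
import numpy as np

def ddim_step(x_t, eps, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM reverse step.

    Splits the update into P_t (the predicted clean image x0) and
    D_t (the direction pointing back toward x_t).
    """
    pred_x0 = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_bar_t)
    direction = np.sqrt(1.0 - alpha_bar_prev) * eps
    return np.sqrt(alpha_bar_prev) * pred_x0 + direction

def asyrp_step(x_t, eps_plain, eps_shifted, alpha_bar_t, alpha_bar_prev):
    """Asymmetric reverse step (illustrative).

    The edited (h-space-shifted) noise prediction enters only the
    predicted-x0 term, while the direction term keeps the original,
    unshifted prediction -- this asymmetry is what steers the sample
    semantically without destroying the reverse trajectory.
    """
    pred_x0 = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_shifted) / np.sqrt(alpha_bar_t)
    direction = np.sqrt(1.0 - alpha_bar_prev) * eps_plain
    return np.sqrt(alpha_bar_prev) * pred_x0 + direction

# Toy usage: with no shift the asymmetric step reduces to plain DDIM;
# a nonzero shift moves only the predicted-x0 component.
rng = np.random.default_rng(0)
x_t = rng.standard_normal((4, 4))
eps = rng.standard_normal((4, 4))
delta = 0.1 * rng.standard_normal((4, 4))  # stand-in for the effect of Δh
plain = asyrp_step(x_t, eps, eps, 0.5, 0.7)
edited = asyrp_step(x_t, eps, eps + delta, 0.5, 0.7)
```

With a zero shift, `asyrp_step` coincides with `ddim_step`; the edit strength grows with the size of the shift and, in the paper, with how many timesteps the shift is applied over.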
Cite
Text
Kwon et al. "Diffusion Models Already Have a Semantic Latent Space." International Conference on Learning Representations, 2023.
Markdown
[Kwon et al. "Diffusion Models Already Have a Semantic Latent Space." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/kwon2023iclr-diffusion/)
BibTeX
@inproceedings{kwon2023iclr-diffusion,
title = {{Diffusion Models Already Have a Semantic Latent Space}},
author = {Kwon, Mingi and Jeong, Jaeseok and Uh, Youngjung},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/kwon2023iclr-diffusion/}
}