An Analysis of Human Alignment of Latent Diffusion Models

Abstract

Diffusion models, trained on large amounts of data, have shown remarkable performance in image synthesis. They exhibit high error consistency with humans and low texture bias when used for classification. Furthermore, prior work has demonstrated that their bottleneck-layer representations can be decomposed into semantic directions. In this work, we analyze how well such representations are aligned with human responses on a triplet odd-one-out task. We find that, despite the aforementioned observations: I) the representational alignment with humans is comparable to that of models trained only on ImageNet-1k; II) the most aligned layers of the denoiser U-Net are intermediate layers, not the bottleneck; III) text conditioning greatly improves alignment at high noise levels, hinting at the importance of abstract textual information, especially in the early stages of generation.
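
Below is a minimal sketch, not taken from the paper, of how agreement on a triplet odd-one-out task is commonly scored: for each triplet, the model's odd one out is the image left over by the most similar pair of representations, and alignment is the fraction of triplets where this matches the human choice. The function name, the use of cosine similarity, and the array layout are illustrative assumptions.

import numpy as np

def odd_one_out_agreement(embeddings, triplets, human_choices):
    # embeddings:    (n_images, d) model representations, e.g. pooled U-Net features (assumed layout)
    # triplets:      (n_triplets, 3) image indices
    # human_choices: (n_triplets,) position (0, 1, or 2) of the human-chosen odd one out
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)  # unit norm -> dot = cosine
    hits = 0
    for (i, j, k), human_odd in zip(triplets, human_choices):
        sims = np.array([
            emb[j] @ emb[k],  # similarity of the pair excluding image i
            emb[i] @ emb[k],  # pair excluding image j
            emb[i] @ emb[j],  # pair excluding image k
        ])
        model_odd = int(np.argmax(sims))  # odd one out = image outside the most similar pair
        hits += int(model_odd == human_odd)
    return hits / len(triplets)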

Cite

Text

Linhardt et al. "An Analysis of Human Alignment of Latent Diffusion Models." ICLR 2024 Workshops: Re-Align, 2024.

Markdown

[Linhardt et al. "An Analysis of Human Alignment of Latent Diffusion Models." ICLR 2024 Workshops: Re-Align, 2024.](https://mlanthology.org/iclrw/2024/linhardt2024iclrw-analysis/)

BibTeX

@inproceedings{linhardt2024iclrw-analysis,
  title     = {{An Analysis of Human Alignment of Latent Diffusion Models}},
  author    = {Linhardt, Lorenz and Morik, Marco and Bender, Sidney and Borras, Naima Elosegui},
  booktitle = {ICLR 2024 Workshops: Re-Align},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/linhardt2024iclrw-analysis/}
}