Revelio: Interpreting and Leveraging Semantic Information in Diffusion Models

Abstract

We study how rich visual semantic information is represented within various layers and denoising timesteps of different diffusion architectures. We uncover monosemantic interpretable features by leveraging k-sparse autoencoders (k-SAE). We substantiate our mechanistic interpretations via transfer learning, training lightweight classifiers on off-the-shelf diffusion models' features. On 4 datasets, we demonstrate the effectiveness of diffusion features for representation learning. We provide an in-depth analysis of how different diffusion architectures, pre-training datasets, and language model conditioning impact visual representation granularity, inductive biases, and transfer learning capabilities. Our work is a critical step towards deepening the interpretability of black-box diffusion models. Code and visualizations are available at: https://github.com/revelio-diffusion/revelio
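The abstract mentions k-sparse autoencoders (k-SAE), which reconstruct feature vectors while keeping only the k largest latent activations per sample. The snippet below is a minimal illustrative sketch of that TopK mechanism in NumPy, not the paper's implementation; all dimensions, weights, and the `ksae_forward` helper are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ksae_forward(x, W_enc, b_enc, W_dec, b_dec, k):
    """Hypothetical k-SAE forward pass: encode, keep top-k latents, decode."""
    z = np.maximum(x @ W_enc + b_enc, 0.0)      # ReLU latent pre-activations
    # zero out all but the k largest activations in each row
    drop_idx = np.argsort(z, axis=-1)[:, :-k]
    z_sparse = z.copy()
    np.put_along_axis(z_sparse, drop_idx, 0.0, axis=-1)
    x_hat = z_sparse @ W_dec + b_dec            # reconstruct the input features
    return x_hat, z_sparse

# toy dimensions: 16-dim features, 64 latents, keep only 4 active
d, m, k = 16, 64, 4
x = rng.normal(size=(8, d))
W_enc = rng.normal(scale=0.1, size=(d, m))
W_dec = rng.normal(scale=0.1, size=(m, d))
x_hat, z = ksae_forward(x, W_enc, np.zeros(m), W_dec, np.zeros(d), k)
```

Enforcing at most k nonzero latents per sample is what encourages individual latent units to align with distinct, interpretable concepts.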

Cite

Text

Kim et al. "Revelio: Interpreting and Leveraging Semantic Information in Diffusion Models." International Conference on Computer Vision, 2025.

Markdown

[Kim et al. "Revelio: Interpreting and Leveraging Semantic Information in Diffusion Models." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/kim2025iccv-revelio/)

BibTeX

@inproceedings{kim2025iccv-revelio,
  title     = {{Revelio: Interpreting and Leveraging Semantic Information in Diffusion Models}},
  author    = {Kim, Dahye and Thomas, Xavier and Ghadiyaram, Deepti},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {4659--4669},
  url       = {https://mlanthology.org/iccv/2025/kim2025iccv-revelio/}
}