Concept Algebra for (Score-Based) Text-Controlled Generative Models
Abstract
This paper concerns the structure of learned representations in text-guided generative models, focusing on score-based models. A key property of such models is that they can compose disparate concepts in a 'disentangled' manner. This suggests these models have internal representations that encode concepts in a 'disentangled' manner. Here, we focus on the idea that concepts are encoded as subspaces of some representation space. We formalize what this means, show there is a natural choice for the representation, and develop a simple method for identifying the part of the representation corresponding to a given concept. In particular, this allows us to manipulate the concepts expressed by the model through algebraic manipulation of the representation. We demonstrate the idea with examples using Stable Diffusion.
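The core linear-algebra idea — identifying a concept with a subspace and editing only the component of a representation lying in that subspace — can be sketched with toy vectors. This is a minimal, hypothetical illustration, not the paper's method: the paper operates on the score of a diffusion model, whereas here we use random vectors, and all names (`z_pairs`, `concept_projector`) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy representation dimension

# Pretend these are representation pairs for prompts that differ only in
# one concept; their differences sample directions within that concept's
# subspace. (Hypothetical data standing in for model representations.)
z_pairs = rng.normal(size=(4, 2, d))
diffs = z_pairs[:, 0, :] - z_pairs[:, 1, :]

def concept_projector(diffs, k):
    """Orthogonal projector onto the span of the top-k singular
    directions of the difference vectors (the 'concept subspace')."""
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    basis = vt[:k]                  # (k, d), rows orthonormal
    return basis.T @ basis          # (d, d) projector

P = concept_projector(diffs, k=2)

# 'Concept algebra': replace the concept component of z_x with that of
# z_y, leaving the orthogonal complement (other concepts) untouched.
z_x = rng.normal(size=d)
z_y = rng.normal(size=d)
z_edit = z_x - P @ z_x + P @ z_y

# The edit changes z_x only inside the concept subspace:
assert np.allclose(z_edit - z_x, P @ (z_y - z_x))
```

Because `P` is an orthogonal projector, the component of `z_x` outside the concept subspace passes through the edit unchanged, which is the sense in which the manipulation is 'disentangled'.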
Cite
Text
Wang et al. "Concept Algebra for (Score-Based) Text-Controlled Generative Models." Neural Information Processing Systems, 2023.
Markdown
[Wang et al. "Concept Algebra for (Score-Based) Text-Controlled Generative Models." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/wang2023neurips-concept/)
BibTeX
@inproceedings{wang2023neurips-concept,
  title = {{Concept Algebra for (Score-Based) Text-Controlled Generative Models}},
  author = {Wang, Zihao and Gui, Lin and Negrea, Jeffrey and Veitch, Victor},
  booktitle = {Neural Information Processing Systems},
  year = {2023},
  url = {https://mlanthology.org/neurips/2023/wang2023neurips-concept/}
}