Controllable Generative Modeling via Causal Reasoning

Abstract

Deep latent variable generative models excel at generating complex, high-dimensional data, often exhibiting impressive generalization beyond the training distribution. However, many such models in use today are black boxes trained on large unlabelled datasets with statistical objectives, and they lack the interpretable understanding of the latent space required for controlling the generative process. We propose CAGE, a framework for controllable generation in latent variable models based on causal reasoning. Given a pair of attributes, CAGE infers the implicit cause-effect relationships between these attributes as induced by a deep generative model. This is achieved by defining and estimating a novel notion of unit-level causal effects in the latent space of the generative model. Thereafter, we use the inferred cause-effect relationships to design a novel strategy for controllable generation based on counterfactual sampling. Through a series of large-scale synthetic and human evaluations, we demonstrate that generating counterfactual samples that respect the underlying causal relationships inferred via CAGE leads to subjectively more realistic images.

Cite

Text

Bose et al. "Controllable Generative Modeling via Causal Reasoning." Transactions on Machine Learning Research, 2022.

Markdown

[Bose et al. "Controllable Generative Modeling via Causal Reasoning." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/bose2022tmlr-controllable/)

BibTeX

@article{bose2022tmlr-controllable,
  title     = {{Controllable Generative Modeling via Causal Reasoning}},
  author    = {Bose, Joey and Monti, Ricardo Pio and Grover, Aditya},
  journal   = {Transactions on Machine Learning Research},
  year      = {2022},
  url       = {https://mlanthology.org/tmlr/2022/bose2022tmlr-controllable/}
}