Learning Data Representations with Joint Diffusion Models

Abstract

Joint machine learning models that can both synthesize and classify data often trade performance between the two tasks or are unstable to train. In this work, we start from a set of empirical observations indicating that the internal representations built by contemporary deep diffusion-based generative models are useful not only for generation but also for prediction. We then propose to extend the vanilla diffusion model with a classifier, enabling stable, end-to-end joint training with a parameterization shared between the two objectives. The resulting joint diffusion model outperforms recent state-of-the-art hybrid methods in both classification and generation quality on all evaluated benchmarks. Building on our joint training approach, we show how to directly benefit from the shared generative and discriminative representations by introducing a method for visual counterfactual explanations.
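The core idea in the abstract, a denoising objective and a classification objective computed from one shared representation, can be sketched as a toy joint loss. This is a minimal illustration assuming a linear encoder standing in for the UNet backbone; all layer names, shapes, and the noise schedule here are hypothetical, not the authors' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

D, H, C = 8, 16, 3                       # input dim, shared hidden dim, classes
W_enc = rng.normal(size=(D, H)) * 0.1    # shared encoder (stands in for the UNet)
W_eps = rng.normal(size=(H, D)) * 0.1    # noise-prediction (diffusion) head
W_cls = rng.normal(size=(H, C)) * 0.1    # classifier head

def joint_loss(x0, label, t_scale=0.5):
    """Single-sample joint objective: L = L_denoise + L_classify.

    Both terms are computed from the same shared representation z,
    mirroring the shared parameterization described in the abstract.
    """
    eps = rng.normal(size=D)                                   # sampled noise
    x_t = np.sqrt(1.0 - t_scale) * x0 + np.sqrt(t_scale) * eps # noised input
    z = np.tanh(x_t @ W_enc)                                   # shared features
    eps_hat = z @ W_eps                                        # denoising prediction
    l_denoise = np.mean((eps_hat - eps) ** 2)                  # diffusion MSE loss
    logits = z @ W_cls
    logp = logits - np.log(np.sum(np.exp(logits)))             # log-softmax
    l_classify = -logp[label]                                  # cross-entropy
    return l_denoise + l_classify

x0 = rng.normal(size=D)
loss = joint_loss(x0, label=1)
```

In practice both heads would be trained end-to-end by backpropagating this summed loss through the shared encoder, so the representation is shaped by the generative and discriminative signals simultaneously.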

Cite

Text

Deja et al. "Learning Data Representations with Joint Diffusion Models." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2023. doi:10.1007/978-3-031-43415-0_32

Markdown

[Deja et al. "Learning Data Representations with Joint Diffusion Models." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2023.](https://mlanthology.org/ecmlpkdd/2023/deja2023ecmlpkdd-learning/) doi:10.1007/978-3-031-43415-0_32

BibTeX

@inproceedings{deja2023ecmlpkdd-learning,
  title     = {{Learning Data Representations with Joint Diffusion Models}},
  author    = {Deja, Kamil and Trzcinski, Tomasz and Tomczak, Jakub M.},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2023},
  pages     = {543--559},
  doi       = {10.1007/978-3-031-43415-0_32},
  url       = {https://mlanthology.org/ecmlpkdd/2023/deja2023ecmlpkdd-learning/}
}