MissDiff: Training Diffusion Models on Tabular Data with Missing Values

Abstract

Diffusion models have shown remarkable performance in modeling data distributions and synthesizing data. The vanilla diffusion model typically requires complete or fully observed training data, while incomplete data is a common issue in various real-world applications, particularly in tabular data. This work presents a unified and principled diffusion-based framework for learning from data with missing values under various missing mechanisms. We first observe that the widely adopted "impute-then-generate" pipeline may lead to a biased learning objective. Then we propose to mask the regression loss of Denoising Score Matching in the training phase. We show that the proposed method is consistent in learning the score of data distributions, and the training objective serves as an upper bound for the negative likelihood in certain cases. The proposed framework is evaluated on multiple tabular datasets using realistic and efficacious metrics. It is demonstrated to outperform several baseline methods by a large margin.
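To make the core idea concrete, the sketch below illustrates one plausible way to mask the Denoising Score Matching regression loss so that missing entries contribute no gradient; it is an illustrative PyTorch example under assumed names (`score_model`, `alphas_cumprod`, `obs_mask`), not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): compute the diffusion noise-prediction
# regression loss only on observed entries, as opposed to "impute-then-generate",
# which first fills in missing values and then trains on the completed data.
import torch

def masked_dsm_loss(score_model, x0, obs_mask, alphas_cumprod):
    """x0: (B, D) data with missing entries filled by any placeholder (e.g. 0).
    obs_mask: (B, D), 1 where an entry is observed, 0 where it is missing."""
    B = x0.shape[0]
    T = alphas_cumprod.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)         # random diffusion step
    a_bar = alphas_cumprod[t].unsqueeze(-1)                   # (B, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise    # forward diffusion
    pred = score_model(x_t, t)                                 # predicted noise
    # Mask the regression loss: squared error averaged over observed entries only,
    # so unobserved coordinates never influence the training objective.
    sq_err = (pred - noise) ** 2 * obs_mask
    return sq_err.sum() / obs_mask.sum().clamp(min=1.0)
```

In this sketch the observation mask plays the role described in the abstract: the model is still trained on the full feature vector, but the score-matching objective is evaluated only where ground-truth values exist, avoiding the bias introduced by imputing before training.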

Cite

Text

Ouyang et al. "MissDiff: Training Diffusion Models on Tabular Data with Missing Values." ICML 2023 Workshops: SPIGM, 2023.

Markdown

[Ouyang et al. "MissDiff: Training Diffusion Models on Tabular Data with Missing Values." ICML 2023 Workshops: SPIGM, 2023.](https://mlanthology.org/icmlw/2023/ouyang2023icmlw-missdiff/)

BibTeX

@inproceedings{ouyang2023icmlw-missdiff,
  title     = {{MissDiff: Training Diffusion Models on Tabular Data with Missing Values}},
  author    = {Ouyang, Yidong and Xie, Liyan and Li, Chongxuan and Cheng, Guang},
  booktitle = {ICML 2023 Workshops: SPIGM},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/ouyang2023icmlw-missdiff/}
}