Bias Mitigation in Graph Diffusion Models
Abstract
Most existing graph diffusion models suffer from significant bias. We observe that the forward diffusion's maximum perturbation distribution in most models deviates from the standard Gaussian distribution, while reverse sampling consistently starts from a standard Gaussian distribution, which introduces a reverse-starting bias. Together with the inherent exposure bias of diffusion models, this degrades generation quality. This paper proposes a comprehensive approach to mitigate both biases. To mitigate the reverse-starting bias, we employ a newly designed Langevin sampling algorithm whose samples align with the forward maximum perturbation distribution, establishing a new reverse-starting point. To address the exposure bias, we introduce a score correction mechanism based on a newly defined score difference. Our approach, which requires no network modifications, is validated across multiple models, datasets, and tasks, achieving state-of-the-art results.
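To make the reverse-starting alignment concrete, below is a minimal sketch (not the authors' exact algorithm) of how unadjusted Langevin dynamics can refine a standard-Gaussian draw toward the forward process's terminal ("maximum perturbation") distribution, using the trained score network evaluated at the final timestep. The names `score_fn`, `T`, `n_steps`, and `step_size` are illustrative assumptions, not quantities defined in the paper.

```python
import torch

def langevin_init(score_fn, shape, T, n_steps=50, step_size=1e-3, device="cpu"):
    """Sketch: produce a reverse-sampling starting point aligned with the
    forward process's terminal distribution via unadjusted Langevin dynamics.

    score_fn(x, t) is assumed to approximate the score ∇_x log p_t(x)
    of the (noisy) data distribution at timestep t.
    """
    x = torch.randn(shape, device=device)            # conventional N(0, I) start
    for _ in range(n_steps):
        noise = torch.randn_like(x)
        grad = score_fn(x, T)                        # score at the terminal timestep
        # Langevin update: drift along the score plus injected Gaussian noise.
        x = x + 0.5 * step_size * grad + (step_size ** 0.5) * noise
    return x
```

In this sketch the usual standard-Gaussian initialization is kept only as the starting iterate; the Langevin steps then move it toward the terminal distribution implied by the score network, which is the role the paper assigns to its redesigned reverse-starting point.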
Cite
Text
Yu and Zhan. "Bias Mitigation in Graph Diffusion Models." International Conference on Learning Representations, 2025.
Markdown
[Yu and Zhan. "Bias Mitigation in Graph Diffusion Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yu2025iclr-bias/)
BibTeX
@inproceedings{yu2025iclr-bias,
  title     = {{Bias Mitigation in Graph Diffusion Models}},
  author    = {Yu, Meng and Zhan, Kun},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/yu2025iclr-bias/}
}