Text Diffusion with Reinforced Conditioning
Abstract
Diffusion models have demonstrated exceptional capability in generating high-quality images, videos, and audio. Owing to their adaptability in iterative refinement, they hold strong potential for better non-autoregressive sequence generation. However, existing text diffusion models still fall short in performance due to the challenge of handling the discreteness of language. This paper thoroughly analyzes text diffusion models and uncovers two significant limitations: degradation of self-conditioning during training, and misalignment between training and sampling. Motivated by our findings, we propose a novel Text Diffusion model called TReC, which mitigates the degradation with Reinforced Conditioning and the misalignment with Time-Aware Variance Scaling. Our extensive experiments demonstrate the competitiveness of TReC against autoregressive, non-autoregressive, and diffusion baselines. Moreover, qualitative analysis shows its advanced ability to fully utilize the diffusion process in refining samples.
Cite
Text
Liu et al. "Text Diffusion with Reinforced Conditioning." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I12.29316
Markdown
[Liu et al. "Text Diffusion with Reinforced Conditioning." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/liu2024aaai-text/) doi:10.1609/AAAI.V38I12.29316
BibTeX
@inproceedings{liu2024aaai-text,
title = {{Text Diffusion with Reinforced Conditioning}},
author = {Liu, Yuxuan and Yang, Tianchi and Huang, Shaohan and Zhang, Zihan and Huang, Haizhen and Wei, Furu and Deng, Weiwei and Sun, Feng and Zhang, Qi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {14069--14077},
doi = {10.1609/AAAI.V38I12.29316},
url = {https://mlanthology.org/aaai/2024/liu2024aaai-text/}
}