Next Semantic Scale Prediction via Hierarchical Diffusion Language Models

Abstract

In this paper, we introduce Hierarchical Diffusion Language Models (HDLM), a novel family of discrete diffusion models for language modeling. HDLM builds on a hierarchical vocabulary in which low-level tokens with detailed semantics are surjectively mapped to high-level tokens with coarse-grained meanings. In the forward process, each token is independently perturbed to its higher-level ancestor with more abstract semantics according to the scheduler, while in the reverse process the model progressively predicts the next, more detailed semantics. Taken together, HDLM provides a general time-varying next semantic scale prediction process for language modeling. We derive closed-form expressions for the diffusion Evidence Lower Bound (ELBO) and show that HDLM can be implemented flexibly while including the existing MDLM (masked diffusion language model) as a special case. We also propose practical training techniques based on these insights. Extensive text generation experiments validate the effectiveness of HDLM, which demonstrates consistently lower validation and generative perplexity than baselines.
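To make the forward process described in the abstract concrete, here is a minimal, hypothetical sketch, not the paper's actual implementation. It assumes a two-level hierarchy given by a surjective `parent` map from detailed token ids to coarse ancestor ids, plus a root/mask-like token, and a simple scheduler that splits probability mass between keeping a token, lifting it one level, and lifting it to the root; all names and the probability split are illustrative assumptions.

```python
import torch

def forward_perturb(x0, parent, keep_prob, coarse_prob):
    """Illustrative forward perturbation toward higher-level ancestors.

    x0          : (B, L) long tensor of low-level (detailed) token ids.
    parent      : (V,) long tensor; parent[v] is the coarse ancestor of token v
                  (a surjective map, as in the hierarchical vocabulary).
    keep_prob   : scheduler probability that a token keeps its detailed id at time t.
    coarse_prob : probability it is lifted to its coarse ancestor; with the remaining
                  probability 1 - keep_prob - coarse_prob it is lifted to a
                  root/mask-like token (hypothetical id ROOT_ID).
    """
    ROOT_ID = parent.max().item() + 1           # assumed root / mask token id
    u = torch.rand(x0.shape, device=x0.device)  # one independent draw per token
    xt = torch.where(u < keep_prob, x0, parent[x0])         # keep, or lift one level
    xt = torch.where(u >= keep_prob + coarse_prob,          # lift all the way to root
                     torch.full_like(x0, ROOT_ID), xt)
    return xt
```

Under this reading, setting `coarse_prob = 0` collapses the intermediate level and each token is either kept or sent directly to the root/mask token, which matches the abstract's remark that standard masked diffusion (MDLM) arises as a special case.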

Cite

Text

Zhou et al. "Next Semantic Scale Prediction via Hierarchical Diffusion Language Models." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zhou et al. "Next Semantic Scale Prediction via Hierarchical Diffusion Language Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhou2025neurips-next/)

BibTeX

@inproceedings{zhou2025neurips-next,
  title     = {{Next Semantic Scale Prediction via Hierarchical Diffusion Language Models}},
  author    = {Zhou, Cai and Wang, Chenyu and Zhang, Dinghuai and Tong, Shangyuan and Wang, Yifei and Bates, Stephen and Jaakkola, Tommi},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhou2025neurips-next/}
}