DISC: Dynamic Decomposition Improves LLM Inference Scaling

Abstract

Inference scaling methods often rely on decomposing problems into steps, then sampling candidate next steps and selecting the best one. However, these steps and their sizes are typically fixed or depend on domain knowledge. We propose dynamic decomposition, a method that adaptively and automatically breaks down solution and reasoning traces into manageable steps during inference. By allocating compute more effectively—particularly by subdividing challenging steps and sampling them more frequently—dynamic decomposition significantly enhances inference efficiency. Experiments on benchmarks such as APPS, MATH, and LiveCodeBench demonstrate that dynamic decomposition outperforms static approaches, including token-level, sentence-level, and single-step decompositions. These findings highlight the potential of dynamic decomposition to improve a wide range of inference scaling techniques.
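The core idea in the abstract—subdividing a step when it proves difficult and spending more samples on the harder pieces—can be illustrated with a short sketch. This is not the paper's algorithm: the sampler, its quality scores, the split-in-half rule, and the acceptance threshold are all hypothetical stand-ins chosen to make the control flow concrete.

```python
import random

random.seed(0)

def sample_step(prefix: str, step: str, n: int):
    """Hypothetical stand-in for LLM sampling: returns n candidate
    completions of `step`, each paired with a mock quality score."""
    return [(f"{step}~v{i}", random.random()) for i in range(n)]

def dynamic_decompose(prefix: str, step: str, budget: int,
                      threshold: float = 0.8, min_len: int = 2) -> str:
    """Recursively subdivide a step when its best sample scores poorly,
    giving each half a fresh sampling budget so that difficult regions
    receive more total samples (illustrative sketch only)."""
    candidates = sample_step(prefix, step, budget)
    best, score = max(candidates, key=lambda c: c[1])
    # Easy step (high score) or atomic step: accept the best sample.
    if score >= threshold or len(step) < min_len:
        return best
    # Hard step: split it and recurse; compute concentrates where needed.
    mid = len(step) // 2
    left = dynamic_decompose(prefix, step[:mid], budget, threshold, min_len)
    right = dynamic_decompose(prefix + left, step[mid:], budget, threshold, min_len)
    return left + right

if __name__ == "__main__":
    print(dynamic_decompose("", "reason_then_code", budget=4))
```

In contrast, a static decomposition (token-level, sentence-level, or single-step) would fix the step boundaries in advance and spread the sampling budget uniformly over them, regardless of which steps are actually hard.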

Cite

Text

Light et al. "DISC: Dynamic Decomposition Improves LLM Inference Scaling." ICLR 2025 Workshops: DL4C, 2025.

Markdown

[Light et al. "DISC: Dynamic Decomposition Improves LLM Inference Scaling." ICLR 2025 Workshops: DL4C, 2025.](https://mlanthology.org/iclrw/2025/light2025iclrw-disc/)

BibTeX

@inproceedings{light2025iclrw-disc,
  title     = {{DISC: Dynamic Decomposition Improves LLM Inference Scaling}},
  author    = {Light, Jonathan and Cheng, Wei and Wu, Yue and Oyamada, Masafumi and Wang, Mengdi and Paternain, Santiago and Chen, Haifeng},
  booktitle = {ICLR 2025 Workshops: DL4C},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/light2025iclrw-disc/}
}