Soft Reasoning: Navigating Solution Spaces in Large Language Models Through Controlled Embedding Exploration

Abstract

Large Language Models (LLMs) struggle with complex reasoning due to limited output diversity and inefficient search over the solution space. We propose Soft Reasoning, an embedding-based search framework that optimises the embedding of the first generated token to guide the rest of the generation. It combines (1) embedding perturbation for controlled exploration and (2) Bayesian optimisation, which refines embeddings via a verifier-guided objective to balance exploration and exploitation. This approach improves reasoning accuracy and coherence without relying on heuristic search. Experiments demonstrate superior correctness with minimal computation, making it a scalable, model-agnostic solution.
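The abstract's two ingredients, perturbing a first-token embedding for exploration and refining candidates under a verifier-guided objective, can be illustrated with a toy sketch. Everything below is an assumption for illustration: `verifier_score` is a synthetic stand-in for the paper's verifier, and a greedy accept-if-better loop stands in for the actual Bayesian optimisation; no LLM is involved.

```python
import random

def perturb(embedding, scale, rng):
    # Exploration step: add Gaussian noise to the first-token embedding.
    return [x + rng.gauss(0.0, scale) for x in embedding]

def verifier_score(embedding):
    # Hypothetical stand-in objective. The paper scores generations
    # with a verifier; here we just reward closeness to a fixed target.
    target = [0.5] * len(embedding)
    return -sum((x - t) ** 2 for x, t in zip(embedding, target))

def soft_reasoning_search(init_embedding, n_rounds=50, scale=0.3, seed=0):
    """Toy search loop: sample perturbed embeddings and keep the best
    one under the verifier score. The real method uses Bayesian
    optimisation to pick the next candidate; greedy hill-climbing
    stands in for that acquisition step here."""
    rng = random.Random(seed)
    best = list(init_embedding)
    best_score = verifier_score(best)
    for _ in range(n_rounds):
        candidate = perturb(best, scale, rng)
        score = verifier_score(candidate)
        if score > best_score:  # exploit: keep improvements only
            best, best_score = candidate, score
    return best, best_score
```

In the actual method the scored object is the text generated from the perturbed embedding, not the embedding itself, and the exploration/exploitation trade-off is handled by the Bayesian-optimisation acquisition function rather than a fixed noise scale.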

Cite

Text

Zhu et al. "Soft Reasoning: Navigating Solution Spaces in Large Language Models Through Controlled Embedding Exploration." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Zhu et al. "Soft Reasoning: Navigating Solution Spaces in Large Language Models Through Controlled Embedding Exploration." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/zhu2025icml-soft/)

BibTeX

@inproceedings{zhu2025icml-soft,
  title     = {{Soft Reasoning: Navigating Solution Spaces in Large Language Models Through Controlled Embedding Exploration}},
  author    = {Zhu, Qinglin and Zhao, Runcong and Yan, Hanqi and He, Yulan and Chen, Yudong and Gui, Lin},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {80427--80447},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/zhu2025icml-soft/}
}