Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options
Abstract
We present a novel reasoning approach called Flow-of-Options (FoO), designed to address intrinsic biases in Large Language Models (LLMs). Flow-of-Options enables LLMs to systematically explore a diverse range of possibilities in their reasoning, as demonstrated by an FoO-based agentic framework developed for autonomously solving Machine Learning (ML) tasks. FoO enforces diversity in LLM solutions through compressed and interpretable task representations, resulting in improvements of 38.2%-69.2% on standard data science tasks and 37.4%-47.9% on therapeutic chemistry tasks, compared to state-of-the-art baselines. With an overall operating cost of under $1 per task, our framework is well-suited for cost-sensitive applications. Going beyond tabular classification and regression, we show the broader applicability of our FoO-based agentic system to tasks such as reinforcement learning and image generation. Our code is open-sourced at: https://github.com/flagshippioneering/Flow-of-Options.
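The abstract describes FoO only at a high level: for each step of an ML task, the LLM enumerates multiple distinct options, evaluates them, and selects among them rather than committing to its first answer. The sketch below is a conceptual illustration of that idea, not the paper's implementation (see the linked repository for the actual framework); `call_llm`, `run_and_score`, and the prompt wording are hypothetical placeholders.

```python
# Conceptual sketch only: enumerate diverse candidate options for one step of a
# task, score each, and keep the best. Hypothetical helpers; not the FoO code.
from typing import Callable, List, Tuple


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM client; swap in your provider of choice."""
    raise NotImplementedError


def generate_options(task: str, step: str, n_options: int = 3) -> List[str]:
    """Ask the LLM for several distinct approaches to one step of the task."""
    prompt = (
        f"Task: {task}\nStep: {step}\n"
        f"List {n_options} distinct approaches for this step, one per line."
    )
    return [line.strip("- ").strip() for line in call_llm(prompt).splitlines() if line.strip()]


def evaluate_option(
    task: str, step: str, option: str, run_and_score: Callable[[str], float]
) -> float:
    """Turn an option into code via the LLM, then score it with a user-supplied evaluator."""
    code = call_llm(
        f"Task: {task}\nStep: {step}\nImplement this approach in Python:\n{option}"
    )
    return run_and_score(code)


def pick_best_option(
    task: str, step: str, run_and_score: Callable[[str], float]
) -> Tuple[str, float]:
    """Explore every generated option and return the best-scoring one."""
    scored = [
        (option, evaluate_option(task, step, option, run_and_score))
        for option in generate_options(task, step)
    ]
    return max(scored, key=lambda pair: pair[1])
```

In this reading, diversity comes from explicitly generating and scoring several options per step instead of accepting the model's first suggestion; how FoO compresses and represents these options is detailed in the paper itself.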
Cite
Text
Nair et al. "Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Nair et al. "Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/nair2025icml-flowofoptions/)
BibTeX
@inproceedings{nair2025icml-flowofoptions,
title = {{Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options}},
author = {Nair, Lakshmi and Trase, Ian and Kim, J. Mark},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {45529--45560},
volume = {267},
url = {https://mlanthology.org/icml/2025/nair2025icml-flowofoptions/}
}