Outcome-Based Reinforcement Learning to Predict the Future

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has been an effective approach for improving Large Language Models' reasoning in domains such as coding and mathematics. Here, we apply RLVR methods to forecasting future real-world events – a challenging task for RL due to the noisy and delayed outcomes involved. Using a novel dataset of recent questions from a prediction market, together with accompanying relevant news headlines, we show that a compact (14B) reasoning model can be trained to match or surpass the predictive accuracy of frontier models like o1, while greatly improving probabilistic calibration. The model's performance is also practically meaningful: in a Polymarket trading simulation, we estimate that its bets would have yielded a return on investment of over 10% across all questions in the test set. We detail and compare the approaches used in training our model, including augmenting our training data with synthetic prediction questions, guardrails for learning stability, and median prediction sampling at inference time.
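The abstract mentions median prediction sampling at inference time. The minimal Python sketch below illustrates the general idea of aggregating several sampled probability forecasts with their median; it is not the authors' implementation, and the `sample_probability` helper is a hypothetical placeholder standing in for a real model call and answer parser.

```python
# Illustrative sketch of median prediction sampling (assumptions labeled below).
import random
from statistics import median


def sample_probability(question: str) -> float:
    """Hypothetical placeholder for a single model call that returns a
    probability in [0, 1]; a real implementation would query the trained
    forecasting model and parse the probability from its answer."""
    return random.random()


def median_prediction(question: str, num_samples: int = 8) -> float:
    """Sample several forecasts for the same question and aggregate with
    the median, which is more robust to outlier samples than the mean."""
    samples = [sample_probability(question) for _ in range(num_samples)]
    return median(samples)


if __name__ == "__main__":
    print(median_prediction("Will event X resolve YES by 2025-12-31?"))
```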

Cite

Text

Turtel et al. "Outcome-Based Reinforcement Learning to Predict the Future." Transactions on Machine Learning Research, 2025.

Markdown

[Turtel et al. "Outcome-Based Reinforcement Learning to Predict the Future." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/turtel2025tmlr-outcomebased/)

BibTeX

@article{turtel2025tmlr-outcomebased,
  title     = {{Outcome-Based Reinforcement Learning to Predict the Future}},
  author    = {Turtel, Benjamin and Franklin, Danny and Skotheim, Kris and Hewitt, Luke and Schoenegger, Philipp},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/turtel2025tmlr-outcomebased/}
}