Context Is Key: A Benchmark for Forecasting with Essential Textual Information

Abstract

Forecasting is a critical task in decision-making across numerous domains. While historical numerical data provide a start, they fail to convey the complete context for reliable and accurate predictions. Human forecasters frequently rely on additional information, such as background knowledge and constraints, which can efficiently be communicated through natural language. However, in spite of recent progress with LLM-based forecasters, their ability to effectively integrate this textual information remains an open question. To address this, we introduce "Context is Key" (CiK), a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context, requiring models to integrate both modalities; crucially, every task in CiK requires understanding textual context to be solved successfully. We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters, and propose a simple yet effective LLM prompting method that outperforms all other tested methods on our benchmark. Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings. This benchmark aims to advance multimodal forecasting by promoting models that are both accurate and accessible to decision-makers with varied technical expertise. The benchmark can be visualized at https://servicenow.github.io/context-is-key-forecasting/v0.

Cite

Text

Williams et al. "Context Is Key: A Benchmark for Forecasting with Essential Textual Information." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Williams et al. "Context Is Key: A Benchmark for Forecasting with Essential Textual Information." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/williams2025icml-context/)

BibTeX

@inproceedings{williams2025icml-context,
  title     = {{Context Is Key: A Benchmark for Forecasting with Essential Textual Information}},
  author    = {Williams, Andrew Robert and Ashok, Arjun and Marcotte, Étienne and Zantedeschi, Valentina and Subramanian, Jithendaraa and Riachi, Roland and Requeima, James and Lacoste, Alexandre and Rish, Irina and Chapados, Nicolas and Drouin, Alexandre},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {66887--66944},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/williams2025icml-context/}
}