CURIE: Evaluating LLMs on Multitask Scientific Long-Context Understanding and Reasoning

Abstract

Scientific problem-solving involves synthesizing information while applying expert knowledge. We introduce CURIE, a scientific long-Context Understanding, Reasoning, and Information Extraction benchmark that measures the potential of Large Language Models (LLMs) to solve scientific problems and assist scientists in realistic workflows. The benchmark comprises ten challenging tasks with a total of 580 problem-and-solution pairs curated by experts in six disciplines (materials science, condensed matter physics, quantum computing, geospatial analysis, biodiversity, and proteins), covering both experimental and theoretical workflows in science. We evaluate a range of closed and open LLMs on the CURIE tasks, which require domain expertise, comprehension of long in-context information, and multi-step reasoning. While Gemini Flash 2.0 and Claude-3 show consistently high comprehension across domains, the popular GPT-4o and Command R+ fail dramatically on protein sequencing tasks. With the best performance at 32%, there is significant room for improvement across all models. We hope that insights gained from CURIE can guide the future development of LLMs in the sciences. Links to the data and evaluation code are available at https://github.com/google/curie

Cite

Text

Cui et al. "CURIE: Evaluating LLMs on Multitask Scientific Long-Context Understanding and Reasoning." International Conference on Learning Representations, 2025.

Markdown

[Cui et al. "CURIE: Evaluating LLMs on Multitask Scientific Long-Context Understanding and Reasoning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/cui2025iclr-curie/)

BibTeX

@inproceedings{cui2025iclr-curie,
  title     = {{CURIE: Evaluating LLMs on Multitask Scientific Long-Context Understanding and Reasoning}},
  author    = {Cui, Hao and Shamsi, Zahra and Cheon, Gowoon and Ma, Xuejian and Li, Shutong and Tikhanovskaya, Maria and Norgaard, Peter Christian and Mudur, Nayantara and Plomecka, Martyna Beata and Raccuglia, Paul and Bahri, Yasaman and Albert, Victor V. and Srinivasan, Pranesh and Pan, Haining and Faist, Philippe and Rohr, Brian A and Statt, Michael J. and Morris, Dan and Purves, Drew and Kleeman, Elise and Alcantara, Ruth and Abraham, Matthew and Mohammad, Muqthar and VanLee, Ean Phing and Jiang, Chenfei and Dorfman, Elizabeth and Kim, Eun-Ah and Brenner, Michael and Ponda, Sameera S and Venugopalan, Subhashini},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/cui2025iclr-curie/}
}