Language Models as Science Tutors
Abstract
NLP has recently made exciting progress toward training language models (LMs) with strong scientific problem-solving skills. However, model development has not focused on real-life use cases of LMs for science, including applications in education that require processing long scientific documents. To address this, we introduce TutorEval and TutorChat. TutorEval is a diverse question-answering benchmark consisting of questions about long chapters from STEM textbooks, written by experts. TutorEval helps measure the real-life usability of LMs as scientific assistants, and it is the first benchmark combining long contexts, free-form generation, and multi-disciplinary scientific knowledge. Moreover, we show that fine-tuning base models with existing dialogue datasets leads to poor performance on TutorEval. Therefore, we create TutorChat, a dataset of 80,000 long synthetic dialogues about textbooks. We use TutorChat to fine-tune Llemma models with 7B and 34B parameters. These LM tutors specialized in math have a 32K-token context window, and they excel at TutorEval while performing strongly on GSM8K and MATH. Our datasets build on open-source materials, and we release our models, data, and evaluations publicly.
Cite
Text
Chevalier et al. "Language Models as Science Tutors." International Conference on Machine Learning, 2024.
Markdown
[Chevalier et al. "Language Models as Science Tutors." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/chevalier2024icml-language/)
BibTeX
@inproceedings{chevalier2024icml-language,
  title = {{Language Models as Science Tutors}},
  author = {Chevalier, Alexis and Geng, Jiayi and Wettig, Alexander and Chen, Howard and Mizera, Sebastian and Annala, Toni and Aragon, Max and Fanlo, Arturo Rodriguez and Frieder, Simon and Machado, Simon and Prabhakar, Akshara and Thieu, Ellie and Wang, Jiachen T. and Wang, Zirui and Wu, Xindi and Xia, Mengzhou and Xia, Wenhan and Yu, Jiatong and Zhu, Junjie and Ren, Zhiyong and Arora, Sanjeev and Chen, Danqi},
  booktitle = {International Conference on Machine Learning},
  year = {2024},
  pages = {8310-8335},
  volume = {235},
  url = {https://mlanthology.org/icml/2024/chevalier2024icml-language/}
}