Language Models Are Multilingual Chain-of-Thought Reasoners

Abstract

We evaluate the reasoning abilities of large language models in multilingual settings. We introduce the Multilingual Grade School Math (MGSM) benchmark, created by manually translating 250 grade-school math problems from the GSM8K dataset (Cobbe et al., 2021) into ten typologically diverse languages. We find that the ability to solve MGSM problems via chain-of-thought prompting emerges with increasing model scale, and that models have strikingly strong multilingual reasoning abilities, even in underrepresented languages such as Bengali and Swahili. Finally, we show that the multilingual reasoning abilities of language models extend to other tasks such as commonsense reasoning and word-in-context semantic judgment. The MGSM benchmark is publicly available at AnonymousLink and in the supplementary material.

Cite

Text

Shi et al. "Language Models Are Multilingual Chain-of-Thought Reasoners." International Conference on Learning Representations, 2023.

Markdown

[Shi et al. "Language Models Are Multilingual Chain-of-Thought Reasoners." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/shi2023iclr-language/)

BibTeX

@inproceedings{shi2023iclr-language,
  title     = {{Language Models Are Multilingual Chain-of-Thought Reasoners}},
  author    = {Shi, Freda and Suzgun, Mirac and Freitag, Markus and Wang, Xuezhi and Srivats, Suraj and Vosoughi, Soroush and Chung, Hyung Won and Tay, Yi and Ruder, Sebastian and Zhou, Denny and Das, Dipanjan and Wei, Jason},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/shi2023iclr-language/}
}