Cross-Lingual QA: A Key to Unlocking In-Context Cross-Lingual Performance
Abstract
Multilingual large language models (MLLMs) have demonstrated significant cross-lingual capabilities through in-context learning. Existing approaches typically construct monolingual in-context examples, either in the source or target language. However, translating entire in-context examples into the target language can compromise contextual integrity and is costly for long-context passages. To address this, we introduce Cross-lingual QA, a cross-lingual prompting method that translates only the question and answer parts, thus reducing translation costs. Experiments on four typologically diverse multilingual benchmarks show that Cross-lingual QA prompting effectively elicits models' cross-lingual knowledge, outperforming prior monolingual prompting approaches. Furthermore, we show that prompting open-source MLLMs with cross-lingual in-context examples enhances performance as the model scale increases.
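To illustrate the idea described in the abstract, the sketch below builds an in-context example in which the passage stays in the source language (English) while only the question and answer appear in the target language (here Korean). The prompt template, the example passages, and the helper function names are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a Cross-lingual QA style prompt (assumed template, not the
# authors' exact format): passages stay in the source language, while only the
# question and answer of each in-context example are in the target language.

def build_cross_lingual_example(passage_src: str, question_tgt: str, answer_tgt: str) -> str:
    """Format one in-context example: source-language passage, target-language Q/A."""
    return f"Passage: {passage_src}\nQuestion: {question_tgt}\nAnswer: {answer_tgt}\n"

def build_prompt(examples, test_passage: str, test_question: str) -> str:
    """Concatenate the in-context examples and the test instance, leaving the
    final answer slot empty for the model to complete."""
    demos = "\n".join(build_cross_lingual_example(p, q, a) for p, q, a in examples)
    return f"{demos}\nPassage: {test_passage}\nQuestion: {test_question}\nAnswer:"

if __name__ == "__main__":
    # English passage paired with a Korean (target-language) question and answer.
    examples = [(
        "The Nile is a major river in northeastern Africa.",
        "나일강은 어느 대륙에 있나요?",  # "On which continent is the Nile?"
        "아프리카",                      # "Africa"
    )]
    print(build_prompt(
        examples,
        "Mount Everest lies on the border between Nepal and China.",
        "에베레스트산은 어느 두 나라의 국경에 있나요?",
    ))
```

Because only the short question and answer strings are translated, the long passage never passes through a translation step, which is the cost saving the abstract refers to.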
Cite
Text
Kim et al. "Cross-Lingual QA: A Key to Unlocking In-Context Cross-Lingual Performance." ICML 2024 Workshops: ICL, 2024.

Markdown
[Kim et al. "Cross-Lingual QA: A Key to Unlocking In-Context Cross-Lingual Performance." ICML 2024 Workshops: ICL, 2024.](https://mlanthology.org/icmlw/2024/kim2024icmlw-crosslingual/)

BibTeX
@inproceedings{kim2024icmlw-crosslingual,
  title     = {{Cross-Lingual QA: A Key to Unlocking In-Context Cross-Lingual Performance}},
  author    = {Kim, Sunkyoung and Ki, Dayeon and Kim, Yireun and Lee, Jinsik},
  booktitle = {ICML 2024 Workshops: ICL},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/kim2024icmlw-crosslingual/}
}