From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL
Abstract
The remarkable ability of Large Language Models (LLMs) to understand and follow instructions is sometimes limited by weak in-context learning (ICL) performance in low-resource languages. To address this, we introduce a novel approach that leverages cross-lingual retrieval-augmented in-context learning (CREA-ICL). By retrieving semantically similar prompts from high-resource languages, we aim to bolster the zero-shot performance of multilingual pretrained language models (MPLMs) across diverse tasks. Although our approach yields steady improvements in classification tasks, it faces challenges in generation tasks, with Bangla serving as a key case study. Our evaluation offers insights into the performance dynamics of retrieval-augmented in-context learning across both classification and generation domains.
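To make the retrieval step concrete, below is a minimal sketch of cross-lingual retrieval-augmented ICL: a multilingual sentence encoder retrieves the most similar labeled examples from a high-resource (English) pool and prepends them as demonstrations to a low-resource query. The encoder name, the toy example pool, and the prompt template are illustrative assumptions, not the paper's actual pipeline.

```python
# A minimal sketch of cross-lingual retrieval-augmented ICL.
# Assumptions: the sentence-transformers library, a multilingual encoder,
# and a hypothetical labeled English pool; none of this is the paper's setup.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical high-resource (English) labeled pool.
pool = [
    {"text": "The movie was fantastic.", "label": "positive"},
    {"text": "I will never buy this again.", "label": "negative"},
    {"text": "An absolute waste of time.", "label": "negative"},
]
pool_emb = encoder.encode([ex["text"] for ex in pool], convert_to_tensor=True)

def build_prompt(query: str, k: int = 2) -> str:
    """Retrieve the k most semantically similar English examples and
    prepend them as in-context demonstrations for the low-resource query."""
    q_emb = encoder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, pool_emb, top_k=k)[0]
    demos = "\n".join(
        f"Text: {pool[h['corpus_id']]['text']}\nLabel: {pool[h['corpus_id']]['label']}"
        for h in hits
    )
    return f"{demos}\nText: {query}\nLabel:"

# A Bangla sentiment query; the resulting prompt would be fed to an MPLM.
print(build_prompt("সিনেমাটা দারুণ ছিল।"))
```

The design choice is that retrieval happens in a shared multilingual embedding space, so the Bangla query can match English demonstrations directly without translation.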
Cite
Text

Li et al. "From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL." NeurIPS 2023 Workshops: Instruction, 2023.

Markdown

[Li et al. "From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL." NeurIPS 2023 Workshops: Instruction, 2023.](https://mlanthology.org/neuripsw/2023/li2023neuripsw-classification/)

BibTeX
@inproceedings{li2023neuripsw-classification,
  title = {{From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL}},
  author = {Li, Xiaoqian and Nie, Ercong and Liang, Sheng},
  booktitle = {NeurIPS 2023 Workshops: Instruction},
  year = {2023},
  url = {https://mlanthology.org/neuripsw/2023/li2023neuripsw-classification/}
}