Fine-Tuning Multilingual Pretrained African Language Models

Abstract

With the recent increase in low-resource African language text corpora, there have been advancements that have led to the development of multilingual pre-trained language models (PLMs) based on African languages. These PLMs include AfriBERTa \citep{ogueji2021-afriberta}, Afro-XLMR \citep{alabi-etal-2022-adapting-afro-xlmr} and AfroLM \citep{afrolm}, which perform well. The downstream tasks of these models range from text classification and named entity recognition to sentiment analysis. By exploring the idea of fine-tuning the different PLMs, these models can be trained on different African language datasets. This could lead to multilingual models that perform well on the new data for the required downstream task of classification. This leads to the question we attempt to answer: can these PLMs be fine-tuned to perform similarly well on different African language data?
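The sketch below illustrates the kind of fine-tuning setup the abstract describes, using the Hugging Face Transformers Trainer API to adapt a multilingual African-language PLM for text classification. The checkpoint name (`Davlan/afro-xlmr-base`), the toy dataset, and the label set are assumptions for illustration only; they are not taken from the paper.

```python
# Minimal sketch: fine-tuning a multilingual African-language PLM for text
# classification. Checkpoint name, example texts, and labels are assumed,
# not taken from the paper.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumed checkpoint; any of the PLMs mentioned in the abstract could be used.
checkpoint = "Davlan/afro-xlmr-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Hypothetical labeled data; in practice this would be an African language
# classification dataset.
raw = Dataset.from_dict(
    {
        "text": ["example sentence one", "example sentence two"],
        "label": [0, 1],
    }
)

def tokenize(batch):
    # Tokenize and truncate each example to the model's maximum length.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = raw.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="finetuned-plm",
    num_train_epochs=3,
    per_device_train_batch_size=8,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)

trainer.train()
```

In this setup only the task head and training data change between experiments, so the same recipe can be repeated across the different PLMs and African language datasets the abstract refers to.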

Cite

Text

Myoya et al. "Fine-Tuning Multilingual Pretrained African Language Models." ICLR 2023 Workshops: AfricaNLP, 2023.

Markdown

[Myoya et al. "Fine-Tuning Multilingual Pretrained African Language Models." ICLR 2023 Workshops: AfricaNLP, 2023.](https://mlanthology.org/iclrw/2023/myoya2023iclrw-finetuning/)

BibTeX

@inproceedings{myoya2023iclrw-finetuning,
  title     = {{Fine-Tuning Multilingual Pretrained African Language Models}},
  author    = {Myoya, Rozina Lucy and Banda, Fiskani and Marivate, Vukosi and Modupe, Abiodun},
  booktitle = {ICLR 2023 Workshops: AfricaNLP},
  year      = {2023},
  url       = {https://mlanthology.org/iclrw/2023/myoya2023iclrw-finetuning/}
}