AngOFA: Leveraging OFA Embedding Initialization and Synthetic Data for Angolan Language Model
Abstract
In recent years, the development of pre-trained language models (PLMs) has gained momentum, showcasing their capacity to transcend linguistic barriers and facilitate knowledge transfer across diverse languages. However, this progress has largely bypassed very low-resource languages, leaving a notable void in the multilingual landscape. This paper addresses that gap by introducing four PLMs tailored to Angolan languages, trained with a Multilingual Adaptive Fine-tuning (MAFT) approach. We examine the role of informed embedding initialization and synthetic data in improving the downstream performance of MAFT models. Our models outperform the SOTA AfroXLMR-base (developed through MAFT) and OFA (an effective embedding-initialization method) baselines by 12.3 and 3.8 points, respectively.
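For context on the MAFT approach named in the abstract, below is a minimal sketch of MAFT-style adaptation: continued masked-language-model training of an existing multilingual PLM on monolingual target-language text. The starting checkpoint, corpus file, and hyperparameters shown are illustrative assumptions, not the models or data released with this paper.

```python
# Hypothetical sketch of MAFT: continue masked-language-model (MLM) training
# of a multilingual PLM on monolingual text in the target languages.
# "Davlan/afro-xlmr-base" and "angolan_corpus.txt" are assumed placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "Davlan/afro-xlmr-base"  # assumed starting checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Assumed plain-text corpus, one sentence per line.
dataset = load_dataset("text", data_files={"train": "angolan_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="angofa-maft",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=5e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```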
Cite
Text
Quinjica and Adelani. "AngOFA: Leveraging OFA Embedding Initialization and Synthetic Data for Angolan Language Model." ICLR 2024 Workshops: AfricaNLP, 2024.
Markdown
[Quinjica and Adelani. "AngOFA: Leveraging OFA Embedding Initialization and Synthetic Data for Angolan Language Model." ICLR 2024 Workshops: AfricaNLP, 2024.](https://mlanthology.org/iclrw/2024/quinjica2024iclrw-angofa/)
BibTeX
@inproceedings{quinjica2024iclrw-angofa,
title = {{AngOFA: Leveraging OFA Embedding Initialization and Synthetic Data for Angolan Language Model}},
author = {Quinjica, Osvaldo Luamba and Adelani, David Ifeoluwa},
booktitle = {ICLR 2024 Workshops: AfricaNLP},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/quinjica2024iclrw-angofa/}
}