Enhancing Amharic-Llama: Integrating Task Specific and Generative Datasets
Abstract
Large language models (LLMs) have attracted considerable attention in natural language processing (NLP) research because of their exceptional performance in understanding and generating human language. However, low-resource languages are left behind due to a lack of resources. In this work, we focus on enhancing the LLAMA-2-Amharic model by integrating task-specific and generative datasets to improve its performance on Amharic. We compile an Amharic instruction fine-tuning dataset and fine-tune the LLAMA-2-Amharic model on it. The fine-tuned model shows promising results on different NLP tasks. We open-source our dataset creation pipeline, instruction datasets, trained models, and evaluation outputs to promote language-specific studies on these models.
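To make the procedure described in the abstract concrete, below is a minimal sketch of instruction fine-tuning a LLaMA-2-style model on an Amharic instruction dataset with LoRA adapters, assuming the Hugging Face transformers, datasets, and peft libraries. The base model name, data file, Alpaca-style prompt template, and hyperparameters are illustrative assumptions, not the authors' exact setup.

"""Minimal sketch: LoRA instruction fine-tuning of a LLaMA-2-style model.
All names and hyperparameters below are illustrative assumptions."""
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"   # assumption: an Amharic-adapted base in practice
DATA_FILE = "amharic_instructions.jsonl"  # hypothetical local instruction dataset

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token

def format_example(ex):
    # Alpaca-style template; the paper's exact prompt format may differ.
    prompt = f"### Instruction:\n{ex['instruction']}\n\n"
    if ex.get("input"):
        prompt += f"### Input:\n{ex['input']}\n\n"
    prompt += f"### Response:\n{ex['output']}{tokenizer.eos_token}"
    return tokenizer(prompt, truncation=True, max_length=1024)

dataset = load_dataset("json", data_files=DATA_FILE, split="train")
dataset = dataset.map(format_example, remove_columns=dataset.column_names)

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)
# Wrap the base model with low-rank adapters so only a small fraction
# of parameters is trained.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

trainer = Trainer(
    model=model,
    train_dataset=dataset,
    args=TrainingArguments(output_dir="llama2-amharic-sft",
                           per_device_train_batch_size=4,
                           num_train_epochs=3, learning_rate=2e-4,
                           logging_steps=50, bf16=True),
    # mlm=False gives a causal-LM collator that copies input_ids to labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

Using LoRA rather than full fine-tuning keeps the memory footprint small enough for a single accelerator, which is a common choice for adapting LLaMA-2-scale models to low-resource languages.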
Cite
Text
Azime et al. "Enhancing Amharic-Llama: Integrating Task Specific and Generative Datasets." ICLR 2024 Workshops: AfricaNLP, 2024.

Markdown
[Azime et al. "Enhancing Amharic-Llama: Integrating Task Specific and Generative Datasets." ICLR 2024 Workshops: AfricaNLP, 2024.](https://mlanthology.org/iclrw/2024/azime2024iclrw-enhancing/)

BibTeX
@inproceedings{azime2024iclrw-enhancing,
title = {{Enhancing Amharic-Llama: Integrating Task Specific and Generative Datasets}},
author = {Azime, Israel Abebe and Fuge, Mitiku Yohannes and Tonja, Atnafu Lambebo and Belay, Tadesse Destaw and Wassie, Aman Kassahun and Jada, Eyasu Shiferaw and Chanie, Yonas and Sewunetie, Walelign Tewabe and Yimam, Seid Muhie},
booktitle = {ICLR 2024 Workshops: AfricaNLP},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/azime2024iclrw-enhancing/}
}