Multilingual Language Model Adaptive Fine-Tuning: A Study on African Languages. (arXiv:2204.06487v1 [cs.CL])
cs.CL updates on arXiv.org
Multilingual pre-trained language models (PLMs) have demonstrated impressive
performance on several downstream tasks in both high-resource and
low-resource languages. However, there is still a large performance drop for
languages unseen during pre-training, especially African languages. One of the
most effective approaches for adapting to a new language is language adaptive
fine-tuning (LAFT): fine-tuning a multilingual PLM on monolingual text in the
target language using the same pre-training objective. However, few African
languages have large monolingual corpora, and adapting …
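LAFT reuses the model's original pre-training objective, which for most multilingual PLMs is masked language modeling. As an illustration of that objective, here is a minimal sketch of BERT-style token masking; the `MASK_ID` and `VOCAB_SIZE` values are illustrative assumptions, while the 15% selection rate and the 80/10/10 replacement split follow the standard BERT recipe:

```python
import random

MASK_ID = 0        # hypothetical id of the [MASK] token
VOCAB_SIZE = 1000  # hypothetical vocabulary size

def mask_tokens(token_ids, mask_prob=0.15, seed=None):
    """BERT-style masking: select ~15% of positions as prediction targets;
    of those, 80% become [MASK], 10% become a random token, and 10% are
    left unchanged. Returns (inputs, labels), where labels is -100
    (ignored by the loss) everywhere except the selected positions."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in token_ids:
        if rng.random() < mask_prob:
            labels.append(tok)          # this position is a prediction target
            r = rng.random()
            if r < 0.8:
                inputs.append(MASK_ID)                  # 80%: [MASK]
            elif r < 0.9:
                inputs.append(rng.randrange(VOCAB_SIZE))  # 10%: random token
            else:
                inputs.append(tok)                       # 10%: unchanged
        else:
            inputs.append(tok)
            labels.append(-100)         # not a target; loss ignores it
    return inputs, labels
```

During LAFT, batches of monolingual text in the target African language would be masked this way and the PLM trained to recover the original tokens, continuing pre-training rather than learning a new task head.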