April 14, 2022, 1:11 a.m. | Jesujoba O. Alabi, David Ifeoluwa Adelani, Marius Mosbach, Dietrich Klakow

cs.CL updates on arXiv.org

Multilingual pre-trained language models (PLMs) have demonstrated impressive
performance on several downstream tasks for both high-resource and
low-resource languages. However, there is still a large performance drop for
languages unseen during pre-training, especially African languages. One of the
most effective approaches to adapting to a new language is language adaptive
fine-tuning (LAFT): fine-tuning a multilingual PLM on monolingual text in that
language using the same pre-training objective. However, few African languages
have large monolingual corpora, and adapting …
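The abstract only names LAFT, so as a rough illustration, the sketch below continues masked-language-model pre-training of a multilingual PLM on a monolingual corpus in the target language using the Hugging Face Transformers Trainer. The model choice (xlm-roberta-base), the corpus path, and all hyperparameters are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of language adaptive fine-tuning (LAFT): continue masked
# language modelling (the PLM's own pre-training objective) on monolingual
# text in the target language. Model name, file path, and hyperparameters
# are illustrative placeholders, not taken from the paper.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "xlm-roberta-base"  # any multilingual PLM with an MLM head
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Monolingual corpus of the target language, one document per line
# (hypothetical file name).
dataset = load_dataset("text", data_files={"train": "monolingual_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Same objective as pre-training: randomly mask tokens and predict them.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm_probability=0.15
)

args = TrainingArguments(
    output_dir="laft-adapted-model",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=5e-5,
    save_steps=10_000,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()
```

The resulting checkpoint can then be fine-tuned on downstream tasks in the adapted language in the usual way.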

arxiv, fine-tuning, language, language model, multilingual language model, study
